Hi,

I have observed that there is a maximum limit to the number of PV or HVM virtual machines you can start before dom0 hangs or crashes.

For Fedora 11 Linux x86-64 PV guests, dom0 crashes when I start the 7th instance; the most I can start without crashing is 6.

For CentOS 5.2 Linux x86-64 HVM guests, dom0 crashes when I start the 4th instance; the most I can start without crashing is 3.

I have 6 GB of DDR2-800 with an Intel Pentium Dual Core E6300 2.8 GHz on an Intel DQ45CB motherboard.

Are the above limits reasonable considering the hardware specifications of my computer?

I am using Xen 3.5-unstable changeset 20143 with pv-ops dom0 kernels 2.6.30-rc3, 2.6.31-rc6, 2.6.31.1, 2.6.31.4, and 2.6.31.5. My host operating system is Fedora 11 Linux x86-64.

--
Mr. Teo En Ming (Zhang Enming) Dip(Mechatronics) BEng(Hons)(Mechanical Engineering)
Alma Maters: (1) Singapore Polytechnic (2) National University of Singapore
My Primary Blog: http://teo-en-ming-aka-zhang-enming.blogspot.com
My Secondary Blog: http://enmingteo.wordpress.com
My Youtube videos: http://www.youtube.com/user/enmingteo
Email: space.time.universe@gmail.com
Mobile Phone (Starhub Prepaid): +65-8369-2618
Street: Bedok Reservoir Road
Country: Singapore
Mr. Teo En Ming (Zhang Enming)
2009-Nov-08 11:46 UTC
[Xen-devel] Re: Max. PV and HVM Guests
Forgot to mention that each of my Linux PV guests has 512 MB of memory, while each of my Linux HVM guests has 1024 MB.
Hello,

The maximum is a direct function of the available memory: every new machine "eats" some memory, and when there is no more memory left...

I had some servers (64-bit, with 2 quad-core CPUs and 32 GB of RAM) running 19 VMs: Windows Server 2003, Windows 2000 Server, Fedora, and Debian, all 32-bit. On my own system I have tested up to 10 VMs of 512 MB each without breaking the system.

Regards,

JP Pozzi
On Sun, Nov 08, 2009 at 07:45:01PM +0800, Mr. Teo En Ming (Zhang Enming) wrote:
> I have observed that there is a maximum limit to the number of PV or HVM
> virtual machines you can start before dom0 hangs or crashes.

Dom0 or Xen shouldn't crash just from starting guests; that sounds like a bug somewhere.

Any errors or tracebacks? Do you have a serial console configured?

-- Pasi
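For anyone setting this up, Xen's serial console is configured on the hypervisor and dom0 kernel lines in grub.conf. A minimal sketch, using the same option forms that appear in the grub.conf posted later in this thread (the port, baud rate, and kernel/root paths are placeholders for your own setup):

    # Xen hypervisor line: send Xen's console output to the first serial port
    kernel /xen.gz com1=115200,8n1 console=com1
    # dom0 kernel line: send the dom0 kernel console to the Xen virtual console
    module /vmlinuz-<dom0-kernel> ro root=<root-device> console=hvc0 earlyprintk=xen

With a null-modem cable or IPMI serial-over-LAN on the other end, crash tracebacks that never reach the on-screen log can then be captured.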
Mr. Teo En Ming,

I have a CentOS 5.3 x86_64 machine with 16 GB of RAM currently running 41 PV domUs, each with 300 MB of RAM. I have to admit that weird things do happen if I decide to start them all at the same time, but the machine never crashes.

Grant
Mr. Teo En Ming (Zhang Enming)
2009-Nov-09 03:35 UTC
[Xen-users] Re: [Xen-devel] Max. PV and HVM Guests
On Sun, Nov 8, 2009 at 9:50 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> Any errors/tracebacks? Do you have serial console configured?

Yes. I will try to see if there's a traceback.
There also seems to be a limit imposed by CPU context switching, i.e. once you have enough VMs trying to grab a CPU core, things come to a standstill as the CPU spends most of its time switching rather than processing. The most common bottleneck I find is disk performance, but this depends hugely on what your VMs are doing.
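If CPU contention is suspected, the xm toolstack can show how guest vCPUs are spread over the physical cores and what credit-scheduler weights they carry. A small diagnostic sketch, assuming the standard xm subcommands of this Xen version (run from dom0; Domain-0 is just an example domain name):

    # List every virtual CPU and the physical CPU it is currently running on
    xm vcpu-list
    # Show the credit-scheduler weight and cap for a given domain
    xm sched-credit -d Domain-0

On a dual-core box like the one in this thread, many busy vCPUs sharing two physical cores will show up here as vCPUs constantly migrating between CPU 0 and CPU 1.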
Mr. Teo En Ming (Zhang Enming)
2009-Nov-09 10:52 UTC
Re: [Xen-users] Max. PV and HVM Guests
Hi,

Please watch this 4-minute video: http://www.youtube.com/watch?v=LbLaPpwNAx4

I have only started 3 HVM Linux guests with 1 GB of RAM each. I can't start the 4th HVM guest; if I attempt to start the 4th instance, it crashes dom0.

Is there anything in the xm dmesg output that could explain the low limit on the number of VMs I can start before dom0 becomes unresponsive?

Thank you.
Mr. Teo En Ming (Zhang Enming)
2009-Nov-09 11:53 UTC
Re: [Xen-users] Max. PV and HVM Guests
Could it be that my Intel Pentium Dual Core E6300 2.8 GHz processor is not powerful enough? Maybe I need to upgrade to an Intel Core 2 Quad?
Pasi Kärkkäinen
2009-Nov-09 11:54 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
On Mon, Nov 09, 2009 at 06:52:37PM +0800, Mr. Teo En Ming (Zhang Enming) wrote:
> I have only started 3 HVM Linux guests with 1 GB of RAM each. I can't start
> the 4th HVM guest. If I attempt to start the 4th instance, it will crash dom0.

Have you limited dom0 memory (by specifying the dom0_mem=XMB option in grub.conf for xen.gz)?

What does "xm info" say about free memory before starting any guests?

-- Pasi
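A minimal sketch of the two things being asked about, using the same option names that appear later in this thread (the 1024M value is only an example, and the grep filter is just a convenience):

    # grub.conf: the dom0 memory cap goes on the hypervisor line, e.g.
    #   kernel /xen.gz dom0_mem=1024M iommu=1
    # (the dom0 kernel and initrd then follow as "module" lines)

    # Check Xen's view of memory before starting any guests:
    xm info | grep -E 'total_memory|free_memory'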
Mr. Teo En Ming (Zhang Enming)
2009-Nov-09 12:01 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
No, I didn't limit dom0 memory in grub.conf.

Here's my xm list and xm info output after I have shut down all the virtual machines:

[root@fedora11-x86-64-host ~]# xm list
Name                                ID   Mem VCPUs      State   Time(s)
Domain-0                             0  2812     2     r-----   3242.5
[root@fedora11-x86-64-host ~]# xm info
host                   : fedora11-x86-64-host
release                : 2.6.30-rc3-enming.teo-tip
version                : #1 SMP Wed Aug 19 23:14:15 SGT 2009
machine                : x86_64
nr_cpus                : 2
nr_nodes               : 1
cores_per_socket       : 2
threads_per_core       : 1
cpu_mhz                : 2800
hw_caps                : bfebfbff:20100800:00000000:00000140:0400e3bd:00000000:00000001:00000000
virt_caps              : hvm hvm_directio
total_memory           : 6039
free_memory            : 3124
node_to_cpu            : node0:0-1
node_to_memory         : node0:3124
xen_major              : 3
xen_minor              : 5
xen_extra              : -unstable
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Tue Sep 01 11:34:31 2009 +0100 20143:a7de5bd776ca
xen_commandline        : iommu=1
cc_compiler            : gcc version 4.4.1 20090725 (Red Hat 4.4.1-2) (GCC)
cc_compile_by          : root
cc_compile_domain      : (none)
cc_compile_date        : Thu Sep 10 07:01:13 SGT 2009
xend_config_format     : 4
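As a rough cross-check using only the figures reported in this thread, the free_memory value above already lines up with the limits described at the start:

    free_memory with no guests running : 3124 MB
    3124 MB / 1024 MB per HVM guest    = ~3.0  -> matches the "3 HVM guests" limit
    3124 MB /  512 MB per PV guest     = ~6.1  -> matches the "6 PV guests" limit

So the guests that fail to start are the ones for which no free memory remains, which is why the discussion below turns to how dom0's own allocation is managed.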
Pasi Kärkkäinen
2009-Nov-09 12:05 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
On Mon, Nov 09, 2009 at 08:01:00PM +0800, Mr. Teo En Ming (Zhang Enming) wrote:
> No, I didn't limit dom0 memory in grub.conf.

You should.

If dom0 has all the memory at boot time, you need to balloon down dom0 memory every time you create a new guest, and this can (and will) cause problems with the dom0 Linux kernel.

Linux calculates some internal parameters/buffers/values based on the _boot time_ amount of memory, and when the available memory later drops to only a small fraction of that while creating new guests, bad things can happen. It still shouldn't crash, though. I bet your problem will be fixed if you limit dom0 memory to, say, dom0_mem=512M and reboot.

-- Pasi
Mr. Teo En Ming (Zhang Enming)
2009-Nov-09 12:14 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
What is a good value for dom0_mem if I want to start the X server and run GNOME? Will 512 MB be too little?
Pasi Kärkkäinen
2009-Nov-09 12:18 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
On Mon, Nov 09, 2009 at 08:14:27PM +0800, Mr. Teo En Ming (Zhang Enming) wrote:
> What is a good value for dom0_mem if I want to start X server and run
> GNOME? Will 512 MB be too little?

Go for 1024 MB then.

-- Pasi
Mr. Teo En Ming (Zhang Enming)
2009-Nov-09 13:10 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
Great! After setting dom0_mem=1024M, I can start all 5 nodes of my Rocks HPC cluster without crashing dom0, compared to the previous limit of 3 nodes when dom0_mem was not set.

Thank you Pasi! Another resource problem solved.

Here's my latest grub.conf:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/vg_fedora11_host-lv_root
#          initrd /initrd-version.img
#boot=/dev/sda
default=4
timeout=100
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.31.5-xen-enming.teo)
        root (hd0,0)
#       kernel /vmlinuz-2.6.31.5-xen-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
#       initrd /initrd-2.6.31.5-xen-enming.teo.img
        kernel /xen.gz dom0_mem=1024M iommu=1
        module /vmlinuz-2.6.31.5-xen-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0 nomodeset
        module /initrd-2.6.31.5-xen-enming.teo.img
title Fedora (2.6.31.4-xen-enming.teo)
        root (hd0,0)
#       kernel /vmlinuz-2.6.31.4-xen-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
#       initrd /initrd-2.6.31.4-xen-enming.teo.img
        kernel /xen.gz dom0_mem=1024M iommu=1
        module /vmlinuz-2.6.31.4-xen-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0 nomodeset
        module /initrd-2.6.31.4-xen-enming.teo.img
title Fedora (2.6.31.1-xen-enming.teo)
        root (hd0,0)
#       kernel /vmlinuz-2.6.31.1-xen-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
#       initrd /initrd-2.6.31.1-xen-enming.teo.img
        kernel /xen.gz dom0_mem=1024M iommu=1
        module /vmlinuz-2.6.31.1-xen-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0 nomodeset
        module /initrd-2.6.31.1-xen-enming.teo.img
title Fedora (2.6.31-enming.teo)
        root (hd0,0)
        kernel /vmlinuz-2.6.31-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
        initrd /initrd-2.6.31-enming.teo.img
title Fedora (2.6.30.5-enming.teo)
        root (hd0,0)
        kernel /vmlinuz-2.6.30.5-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
        initrd /initrd-2.6.30.5-enming.teo.img
title Fedora (2.6.18.8-enming.teo)
        root (hd0,0)
#       kernel /vmlinuz-2.6.18.8-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
#       initrd /initrd-2.6.18.8-enming.teo.img
        kernel /xen.gz dom0_mem=1024M iommu=1
        module /vmlinuz-2.6.18.8-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root
        module /initrd-2.6.18.8-enming.teo.img
title Fedora (2.6.31-rc6-enming.teo) with Serial Console
        root (hd0,0)
#       kernel /vmlinuz-2.6.31-rc6-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
#       initrd /initrd-2.6.31-rc6-enming.teo.img
        kernel /xen.gz dom0_mem=1024M iommu=1 iommu_inclusive_mapping=1 com1=115200,8n1 console=com1
#       module /vmlinuz-2.6.31-rc6-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root selinux=0 xencons=ttyS0 console=ttyS0,115200
        module /vmlinuz-2.6.31-rc6-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root selinux=0 console=hvc0 earlyprintk=xen
        module /initrd-2.6.31-rc6-enming.teo.img
title Fedora (2.6.30-rc3-enming.teo-tip) with Serial Console
        root (hd0,0)
#       kernel /vmlinuz-2.6.30-rc3-enming.teo-tip ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
#       initrd /initrd-2.6.30-rc3-enming.teo-tip.img
        kernel /xen.gz dom0_mem=1024M iommu=1 iommu_inclusive_mapping=1 com1=115200,8n1 console=com1
#       module /vmlinuz-2.6.30-rc3-enming.teo-tip ro root=/dev/mapper/vg_fedora11_host-lv_root selinux=0 xencons=ttyS0 console=ttyS0,115200
        module /vmlinuz-2.6.30-rc3-enming.teo-tip ro root=/dev/mapper/vg_fedora11_host-lv_root selinux=0 console=hvc0 earlyprintk=xen
        module /initrd-2.6.30-rc3-enming.teo-tip.img
title Fedora (2.6.31-rc6-enming.teo)
        root (hd0,0)
#       kernel /vmlinuz-2.6.31-rc6-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
#       initrd /initrd-2.6.31-rc6-enming.teo.img
        kernel /xen.gz dom0_mem=1024M iommu=1
        module /vmlinuz-2.6.31-rc6-enming.teo ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0 nomodeset
        module /initrd-2.6.31-rc6-enming.teo.img
title Fedora (2.6.30-rc3-enming.teo-tip)
        root (hd0,0)
#       kernel /vmlinuz-2.6.30-rc3-enming.teo-tip ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
#       initrd /initrd-2.6.30-rc3-enming.teo-tip.img
        kernel /xen.gz dom0_mem=1024M iommu=1
        module /vmlinuz-2.6.30-rc3-enming.teo-tip ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0 nomodeset
        module /initrd-2.6.30-rc3-enming.teo-tip.img
title Fedora (2.6.29.4-167.fc11.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.29.4-167.fc11.x86_64 ro root=/dev/mapper/vg_fedora11_host-lv_root rhgb quiet selinux=0
        initrd /initrd-2.6.29.4-167.fc11.x86_64.img
Mr. Teo En Ming (Zhang Enming)
2009-Nov-09 14:37 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
Hi all,

I have discovered that setting dom0_mem also solves another problem I was facing. Previously I had complained that, after starting an HVM guest on any of the pv-ops dom0 kernels 2.6.31.x, dom0 became very slow, sluggish, and unresponsive, so much so that it was nearly impossible to start another HVM virtual machine.

Now, after setting dom0_mem, I booted into pv-ops dom0 kernel 2.6.31.5 and started all 5 Rocks HPC cluster compute nodes in one go. And guess what? Dom0 is not even sluggish! I could still do desktop screen video capturing!

Setting dom0_mem really kills two birds with one stone: it raises the number of VMs that I can start, and it also resolves the sluggishness in dom0 after starting a virtual machine (pv-ops kernels 2.6.31.x were affected; 2.6.30-rc3 was not). Setting dom0_mem really does wonders. Thank you Pasi!
Nick Couchman
2009-Nov-09 15:06 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
On 2009/11/09 at 05:05, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>> No, I didn't limit dom0 memory in grub.conf.
>
> You should.

Really? I thought current conventional wisdom was to allow Xen to self-manage memory in both dom0 and domUs, and not to adjust this manually. I run several Xen systems with anywhere from 8 to 24 GB of RAM and 20 to 30 domUs on some of them, and I have *never* specified the dom0 memory at boot time; Xen ballooning has always functioned perfectly well and has never crashed my dom0.

Furthermore, while I'm not a Linux developer and so not familiar with how Linux calculates buffering and caching, I do know that my Linux systems dynamically manage buffers and caches: when memory is reduced or some application requires a larger amount of physical memory, Linux reduces the amount of data held in buffers and caches.

Of course, a lot of this depends on what you're doing in dom0. On my Xen servers, dom0 is strictly for Xen management; I'm not running anything else in dom0 that would require large amounts of memory, memory buffers and caches, etc.

-Nick
Pasi Kärkkäinen
2009-Nov-09 15:17 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
On Mon, Nov 09, 2009 at 08:06:54AM -0700, Nick Couchman wrote:> > > >>> On 2009/11/09 at 05:05, Pasi Kärkkäinen<pasik@iki.fi> wrote: > > On Mon, Nov 09, 2009 at 08:01:00PM +0800, Mr. Teo En Ming (Zhang > Enming) > > wrote: > >> No, I didn''t limit dom0 memory in grub.conf. > >> > > > > You should. > > Really? I thought current conventional wisdom was to allow Xen to > self-manage memory in both dom0 and domUs, and not to manually adjust > this? I run several Xen systems with anywhere from 8 to 24 GB of RAM > and 20 to 30 domUs on some of these systems and have *never* specified > the dom0 memory at boot time - the Xen ballooning has always functioned > perfectly fine, and never crashed my dom0. >Yes, Xen is totally OK with this, but dom0 Linux has more problems..> Furthermore, while I''m not > Linux developer and so not familiar with how Linux calculates buffering > and caching, I do know that my Linux systems dynamically manage buffers > and caches, and when memory is reduced or some application requires a > larger amount of physical memory, Linux reduces the amount of data in > buffers and caches. >Yeah, it has to do with sizing the network buffers, caches etc.. It shouldn''t _crash_, so Teo is seeing some bug I believe. But it has always been "best practice" to limit dom0 memory - and prevent weird things happening later (like "memory squeeze in netback driver").> Of course, a lot of this depends on what you''re doing in dom0 - on my > Xen servers, my dom0 is strictly for Xen management - I''m not running > anything else in dom0 that would require large amounts of memory, memory > buffers and caches, etc. >Teo is running graphical stuff, X etc so it''s a bit different.. -- Pasi> > > > > If dom0 has all the memory at boot time, you need to balloon down > dom0 > > memory every time you create a new guest - this can (and will) cause > > > problems with the dom0 linux kernel. > > > > Linux calculates some internal parameters/buffers/values based on > the > > _boot time_ amount of memory. And when the amount of memory goes down > to > > only a small fraction of that while creating new guests bad things > can > > happen.. > > > > It still shouldn''t crash though.. I bet your problem will get fixed > when > > you limit the dom0 memory to say dom0_mem=512M and reboot. > > > > -- Pasi > > > >> Here''s my xm info output after I have shutdown all the virtual > machines. 
> >> > >> [root@fedora11-x86-64-host ~]# xm list > >> Name ID Mem VCPUs > State > >> Time(s) > >> Domain-0 0 2812 2 > r----- > >> 3242.5 > >> [root@fedora11-x86-64-host ~]# xm info > >> host : fedora11-x86-64-host > >> release : 2.6.30-rc3-enming.teo-tip > >> version : #1 SMP Wed Aug 19 23:14:15 SGT 2009 > >> machine : x86_64 > >> nr_cpus : 2 > >> nr_nodes : 1 > >> cores_per_socket : 2 > >> threads_per_core : 1 > >> cpu_mhz : 2800 > >> hw_caps : > >> > bfebfbff:20100800:00000000:00000140:0400e3bd:00000000:00000001:00000000 > >> virt_caps : hvm hvm_directio > >> total_memory : 6039 > >> free_memory : 3124 > >> node_to_cpu : node0:0-1 > >> node_to_memory : node0:3124 > >> xen_major : 3 > >> xen_minor : 5 > >> xen_extra : -unstable > >> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p > hvm-3.0-x86_32 > >> hvm-3.0-x86_32p hvm-3.0-x86_64 > >> xen_scheduler : credit > >> xen_pagesize : 4096 > >> platform_params : virt_start=0xffff800000000000 > >> xen_changeset : Tue Sep 01 11:34:31 2009 +0100 > > 20143:a7de5bd776ca > >> xen_commandline : iommu=1 > >> cc_compiler : gcc version 4.4.1 20090725 (Red Hat > 4.4.1-2) > >> (GCC) > >> cc_compile_by : root > >> cc_compile_domain : (none) > >> cc_compile_date : Thu Sep 10 07:01:13 SGT 2009 > >> xend_config_format : 4 > >> > >> -- > >> Mr. Teo En Ming (Zhang Enming) Dip(Mechatronics) > BEng(Hons)(Mechanical > >> Engineering) > >> Alma Maters: > >> (1) Singapore Polytechnic > >> (2) National University of Singapore > >> My Primary Blog: > [1]http://teo-en-ming-aka-zhang-enming.blogspot.com > >> My Secondary Blog: [2]http://enmingteo.wordpress.com > >> My Youtube videos: [3]http://www.youtube.com/user/enmingteo > >> Email: [4]space.time.universe@gmail.com > >> Mobile Phone (Starhub Prepaid): +65-8369-2618 > >> Street: Bedok Reservoir Road > >> Country: Singapore > >> > >> On Mon, Nov 9, 2009 at 7:54 PM, Pasi Kärkkäinen <[5]pasik@iki.fi> > wrote: > >> > >> On Mon, Nov 09, 2009 at 06:52:37PM +0800, Mr. Teo En Ming > (Zhang > > Enming) > >> wrote: > >> > Hi, > >> > > >> > Please watch this 4-minute video at > >> > [1][6]http://www.youtube.com/watch?v=LbLaPpwNAx4 > >> > > >> > I have only started 3 HVM Linux guests with 1 GB ram each. > I can''t > >> start > >> > the 4th HVM guest. If I attempt to start the 4th instance, > it will > >> crash > >> > dom0. > >> > > >> > Are there anything in the xm dmesg output that could > explain the > >> low limit > >> > to the number of VMs that I could start before dom0 > becomes > >> unresponsive? > >> > > >> > >> Have you limited dom0 memory (by specifying dom0_mem=XMB option > in > >> grub.conf for xen.gz) ? > >> > >> What does "xm info" say about free memory before starting any > guests? > >> -- Pasi > >> > >> References > >> > >> Visible links > >> 1. http://teo-en-ming-aka-zhang-enming.blogspot.com/ > >> 2. http://enmingteo.wordpress.com/ > >> 3. http://www.youtube.com/user/enmingteo > >> 4. mailto:space.time.universe@gmail.com > >> 5. mailto:pasik@iki.fi > >> 6. http://www.youtube.com/watch?v=LbLaPpwNAx4 > > > > -------- > This e-mail may contain confidential and privileged material for the sole use of the intended recipient. If this email is not intended for you, or you are not responsible for the delivery of this message to the intended recipient, please note that this message may contain SEAKR Engineering (SEAKR) Privileged/Proprietary Information. In such a case, you are strictly prohibited from downloading, photocopying, distributing or otherwise using this message, its contents or attachments in any way. 
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
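To see concretely what "xm info" reports about free memory before any guests are started, and how much dom0 itself is holding, a minimal check looks like this (the grep pattern is only an illustration; the field names are the ones in the xm info output quoted above):

  # dom0's current allocation is the "Mem" column, in MB
  xm list Domain-0

  # hypervisor-wide totals, in MB
  xm info | grep -E 'total_memory|free_memory'

In the output above, dom0 holds 2812 MB while only 3124 MB remain free for new guests, which is why the fourth 1024 MB HVM guest cannot be started cleanly.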
On 09/11/2009 15:06, "Nick Couchman" <Nick.Couchman@seakr.com> wrote:

> Really? I thought current conventional wisdom was to allow Xen to
> self-manage memory in both dom0 and domUs, and not to manually adjust
> this? I run several Xen systems with anywhere from 8 to 24 GB of RAM
> and 20 to 30 domUs on some of these systems and have *never* specified
> the dom0 memory at boot time - the Xen ballooning has always functioned
> perfectly fine and has never crashed my dom0. Furthermore, while I'm
> not a Linux developer and so not familiar with how Linux calculates
> buffering and caching, I do know that my Linux systems dynamically
> manage buffers and caches, and when memory is reduced or some
> application requires a larger amount of physical memory, Linux reduces
> the amount of data held in buffers and caches.

If you are not using dom0 as a general-purpose OS, then it is a very good
idea to specify dom0's memory allowance via dom0_mem= and disable
auto-ballooning in xend-config.sxp. There are a few reasons for this, the
most compelling being that Linux has a metadata overhead for tracking
memory usage, and this overhead is a fraction (say a percent or so) of its
initial memory allocation. That overhead may be just 2% of 24 GB, say, but
if dom0 then gets ballooned down to 1 GB, the same overhead is more like
50% of what remains! Clearly you are limited in how far you can balloon
down without risking the OOM killer in dom0.

Apart from that, the auto-ballooner has been implicated in various quirky
bugs in the past -- failing domain creations and migrations for the most
part -- so it's nice to turn it off if you can, as that's one less thing
to fail. And if dom0 is single-purpose, you should be able to work out how
much memory it needs for that purpose and statically allocate it. Using
the auto-ballooner is actually perverse in this scenario, in that dom0
gets the least memory when it needs it the most (it presumably has the
highest load when servicing the most VMs, but by then the auto-ballooner
has stolen a lot of memory from dom0).

My 2c!

-- Keir

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
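For concreteness, a minimal sketch of the two settings Keir describes, on a GRUB-legacy plus xend setup like the one in this thread. The title, kernel/initrd file names, root device and the 1024M figure are illustrative assumptions rather than values taken from the poster's machine; check your own grub.conf and xend-config.sxp for the exact entries:

  # /boot/grub/grub.conf - cap dom0 at boot on the xen.gz line
  title Xen 3.5-unstable
      kernel /boot/xen.gz dom0_mem=1024M iommu=1
      module /boot/vmlinuz-2.6.31.5 ro root=/dev/sda2    # hypothetical dom0 kernel and root
      module /boot/initrd-2.6.31.5.img

  # /etc/xen/xend-config.sxp - stop xend from auto-ballooning dom0
  (enable-dom0-ballooning no)
  (dom0-min-mem 1024)

After a reboot, the "Mem" column for Domain-0 in xm list should then sit at roughly the dom0_mem value instead of shrinking every time a guest is created.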
On 09/11/2009 15:17, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

>> Furthermore, while I'm not a Linux developer and so not familiar with
>> how Linux calculates buffering and caching, I do know that my Linux
>> systems dynamically manage buffers and caches, and when memory is
>> reduced or some application requires a larger amount of physical
>> memory, Linux reduces the amount of data held in buffers and caches.
>
> Yeah, it has to do with sizing the network buffers, caches, etc.
>
> It shouldn't _crash_, so Teo is seeing some bug, I believe. But it has
> always been "best practice" to limit dom0 memory - it prevents weird
> things from happening later (like "memory squeeze in netback driver").

The issue is not really kernel data like network buffers and the buffer
cache. It is kernel memory metadata -- primarily the per-page info
structure that the kernel maintains. The metadata does not get shrunk
along with memory size when ballooning out, so it grows as a proportion
of the memory still assigned to the domain. That really is significant
when aggressively ballooning down a large-memory domain.

-- Keir

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
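To put rough numbers on that, a back-of-the-envelope sketch only; the ~64 bytes of struct page per 4 KiB page is an assumption that varies with kernel version and config:

  per-page metadata:   ~64 bytes per 4 KiB page   =>  ~1.6% of boot-time RAM
  boot with 24 GB:     0.016 * 24 GB              =>  ~390 MB of page metadata
  balloon to 1 GB:     390 MB / 1024 MB           =>  ~38% of the memory dom0 has left

So a dom0 that boots owning all of the host's memory and is then ballooned down hard ends up spending a large slice of its remaining RAM purely on bookkeeping, which matches Keir's "2% of 24 GB becomes roughly 50%" figure above.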
Pasi Kärkkäinen
2009-Nov-09 15:27 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
On Mon, Nov 09, 2009 at 03:24:59PM +0000, Keir Fraser wrote:
> On 09/11/2009 15:17, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:
>
> >> Furthermore, while I'm not a Linux developer and so not familiar with
> >> how Linux calculates buffering and caching, I do know that my Linux
> >> systems dynamically manage buffers and caches, and when memory is
> >> reduced or some application requires a larger amount of physical
> >> memory, Linux reduces the amount of data held in buffers and caches.
> >
> > Yeah, it has to do with sizing the network buffers, caches, etc.
> >
> > It shouldn't _crash_, so Teo is seeing some bug, I believe. But it has
> > always been "best practice" to limit dom0 memory - it prevents weird
> > things from happening later (like "memory squeeze in netback driver").
>
> The issue is not really kernel data like network buffers and the buffer
> cache. It is kernel memory metadata -- primarily the per-page info
> structure that the kernel maintains. The metadata does not get shrunk
> along with memory size when ballooning out, so it grows as a proportion
> of the memory still assigned to the domain. That really is significant
> when aggressively ballooning down a large-memory domain.

Ok, thanks. That makes sense.

-- Pasi

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Nick Couchman
2009-Nov-09 15:29 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
Thanks for the information! Looks like I'll be adjusting some boot-time
options on my Xen servers. I have seen a couple of issues now and then with
either migration or starting a domU, but it happens once every few months
at the most, and usually I blame the migration issues on a faulty network
connection or something like that. I'll have to try out limiting my dom0s
to 1 or 2 GB of RAM and see if those issues go away!

Thanks!
-Nick

>>> On 2009/11/09 at 08:18, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 09/11/2009 15:06, "Nick Couchman" <Nick.Couchman@seakr.com> wrote:
>
>> Really? I thought current conventional wisdom was to allow Xen to
>> self-manage memory in both dom0 and domUs, and not to manually adjust
>> this? I run several Xen systems with anywhere from 8 to 24 GB of RAM
>> and 20 to 30 domUs on some of these systems and have *never* specified
>> the dom0 memory at boot time - the Xen ballooning has always functioned
>> perfectly fine and has never crashed my dom0. Furthermore, while I'm
>> not a Linux developer and so not familiar with how Linux calculates
>> buffering and caching, I do know that my Linux systems dynamically
>> manage buffers and caches, and when memory is reduced or some
>> application requires a larger amount of physical memory, Linux reduces
>> the amount of data held in buffers and caches.
>
> If you are not using dom0 as a general-purpose OS, then it is a very good
> idea to specify dom0's memory allowance via dom0_mem= and disable
> auto-ballooning in xend-config.sxp. There are a few reasons for this, the
> most compelling being that Linux has a metadata overhead for tracking
> memory usage, and this overhead is a fraction (say a percent or so) of
> its initial memory allocation. That overhead may be just 2% of 24 GB,
> say, but if dom0 then gets ballooned down to 1 GB, the same overhead is
> more like 50% of what remains! Clearly you are limited in how far you can
> balloon down without risking the OOM killer in dom0.
>
> Apart from that, the auto-ballooner has been implicated in various quirky
> bugs in the past -- failing domain creations and migrations for the most
> part -- so it's nice to turn it off if you can, as that's one less thing
> to fail. And if dom0 is single-purpose, you should be able to work out
> how much memory it needs for that purpose and statically allocate it.
> Using the auto-ballooner is actually perverse in this scenario, in that
> dom0 gets the least memory when it needs it the most (it presumably has
> the highest load when servicing the most VMs, but by then the
> auto-ballooner has stolen a lot of memory from dom0).
>
> My 2c!
>
> -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Dan Magenheimer
2009-Nov-09 15:39 UTC
RE: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
Hmmm... I'll bet the problem with dom0 crashing is that dom0 is pv_ops
(2.6.31-based) and this patch, which has been in the 2.6.18.8-based dom0
for some time, has never been put into pv_ops:

http://lists.xensource.com/archives/html/xen-devel/2008-04/msg00143.html

That would explain why some people are seeing this problem and others are
not, and why setting dom0_mem seems to solve the problem (as dom0_mem
effectively shuts off ballooning in dom0).

Nick and others, if you are interested in more detail on ballooning and
improving memory utilization within and between guests, see:

http://wiki.xensource.com/xenwiki/Open_Topics_For_Discussion?action=AttachFile&do=get&target=Memory+Overcommit.pdf

and

http://oss.oracle.com/projects/tmem

(Tmem is in xen-unstable and in Oracle VM 2.2... it takes a few steps to
set it up.)

> -----Original Message-----
> From: Pasi Kärkkäinen [mailto:pasik@iki.fi]
> Sent: Monday, November 09, 2009 8:18 AM
> To: Nick Couchman
> Cc: xen-devel@lists.xensource.com; Mr. Teo En Ming (Zhang Enming);
> xen-users@lists.xensource.com; Robert Dunkley
> Subject: Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Mr. Teo En Ming (Zhang Enming)
2009-Nov-09 15:41 UTC
Re: [Xen-devel] Re: [Xen-users] Max. PV and HVM Guests
Hi,

This is the new video demo of my Rocks HPC compute cluster after I have set
dom0_mem=1024M for my Xen hypervisor. I started all 5 nodes in one go
without crashing and without sluggishness. Please watch the video at

http://www.youtube.com/watch?v=vWHIImVBr4o

It's only 6 minutes. The previous video demo shows that I could only start
3 nodes without setting dom0_mem for the Xen hypervisor; if I tried to
start the 4th node, dom0 would freeze. This is proof that setting dom0_mem
really works and improves overall system performance.

--
Mr. Teo En Ming (Zhang Enming) Dip(Mechatronics) BEng(Hons)(Mechanical Engineering)
Alma Maters:
(1) Singapore Polytechnic
(2) National University of Singapore
My Primary Blog: http://teo-en-ming-aka-zhang-enming.blogspot.com
My Secondary Blog: http://enmingteo.wordpress.com
My Youtube videos: http://www.youtube.com/user/enmingteo
Email: space.time.universe@gmail.com
Mobile Phone (Starhub Prepaid): +65-8369-2618
Street: Bedok Reservoir Road
Country: Singapore

On Mon, Nov 9, 2009 at 11:29 PM, Nick Couchman <Nick.Couchman@seakr.com> wrote:

> Thanks for the information! Looks like I'll be adjusting some boot-time
> options on my Xen servers. I have seen a couple of issues now and then
> with either migration or starting a domU, but it happens once every few
> months at the most, and usually I blame the migration issues on a faulty
> network connection or something like that. I'll have to try out limiting
> my dom0s to 1 or 2 GB of RAM and see if those issues go away!
>
> Thanks!
> -Nick
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
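For anyone reproducing this, two quick checks with the xm commands already shown earlier in the thread confirm that the dom0_mem setting actually took effect after the reboot; the exact numbers are of course system-specific:

  # the hypervisor command line should now include dom0_mem=1024M
  xm info | grep -E 'xen_commandline|free_memory'

  # the Mem column should stay at roughly 1024 instead of shrinking as guests start
  xm list Domain-0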