Hi,

we're contemplating getting a large new server, where we will run a number
of virtual servers. Are there any things we need to keep in mind in that
case? Are there limitations on what a Xen system can manage?

We're talking about a 4 x quad-core CPU server with 64 GB of RAM and a
couple of terabytes of RAIDed SATA storage.

-Morten

(Re-sending this, as the first message didn't seem to go through.)
Hi,

As far as I know, there are no such limitations in Xen. Enjoy your large
new server!

Nico

2009/1/1 <morten@nidelven-it.no>:
> Hi,
>
> we're contemplating getting a large new server, where we will run a
> number of virtual servers. Are there any things we need to keep in mind
> in that case? Are there limitations on what a Xen system can manage?
>
> We're talking about a 4 x quad-core CPU server with 64 GB of RAM and a
> couple of terabytes of RAIDed SATA storage.
>
> -Morten
>
> (Re-sending this, as the first message didn't seem to go through.)

--
Nicolas Daneau
Rue Maurice Bertrand 543
5300 Landenne
0486/69.01.65
On Thu, 1 Jan 2009, morten@nidelven-it.no wrote:

> we're contemplating getting a large new server, where we will run a
> number of virtual servers. Are there any things we need to keep in mind
> in that case? Are there limitations on what a Xen system can manage?
>
> We're talking about a 4 x quad-core CPU server with 64 GB of RAM and a
> couple of terabytes of RAIDed SATA storage.

I have a 2 x quad-core Dell PE2900 server with 24 GB memory and a couple
of TB of RAID-1 SATA disk running CentOS 5.2 x86_64 and xen 3.0.3, and
currently have 33 file-based guests running on it (mostly 32-bit and
64-bit Linux, some Windows). All guests have 1 vcpu and 512 MB memory.
This setup has been running solidly for about 6 months. Some things I
have noticed:

- HVM guests were a lot slower than PV guests, and there is a _lot_ of
  qemu-dm overhead on Dom0. In particular, 32-bit HVM guests were much
  slower than 64-bit HVM guests. Avoid HVM as much as possible, and if
  you can't (Windows), use the Xen PV drivers. My workload consists
  mostly of a software development environment, so I run a lot of makes
  (on Windows, under cygwin, with sources and objects on Samba shares).
  I found that the Xen PV drivers on Windows improved performance by
  over three times (reduced one compilation from 6 hours to 100
  minutes); Windows compile performance with cygwin is now about 80% of
  native speed.

- I converted most of my Linux guests to diskless guests with an NFS
  root, with the Dom0 as NFS server. Not only do the formerly-HVM guests
  run much faster, but they also run faster than file-based PV guests,
  and now 32-bit guests are a little faster than 64-bit guests (less
  stuff to read from NFS, I presume). The qemu-dm overhead on Dom0 is
  now essentially zero.

- With a large number of guests, you have to be more careful with their
  start/stop/start order to avoid the 'out of memory' error.

- Set xen.independent_wallclock on Linux guests if possible (Xen
  kernel), otherwise it will be next to impossible to keep the clocks in
  sync.

Steve
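For anyone who wants to try the diskless setup Steve describes, a minimal
sketch of a PV guest config with an NFS root follows. The guest name,
address, paths, and kernel version are invented for illustration, not
taken from Steve's setup; the independent_wallclock setting he mentions
is the sysctl shown at the end.

    # /etc/xen/guest1 -- hypothetical PV guest booting from an NFS root
    kernel     = "/boot/vmlinuz-2.6.18-xenU"      # domU kernel kept on dom0
    ramdisk    = "/boot/initrd-2.6.18-xenU.img"
    memory     = 512
    vcpus      = 1
    name       = "guest1"
    vif        = [ "bridge=xenbr0" ]
    # no disk = [...] line: the root file system comes over NFS
    root       = "/dev/nfs"
    nfs_server = "192.168.0.1"                    # dom0's address on the bridge
    nfs_root   = "/exports/guest1"

    # and inside each Linux guest (xenified kernel), to decouple its clock:
    echo "xen.independent_wallclock = 1" >> /etc/sysctl.conf
    sysctl -p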
Hi Steve,

Do you have a link or details of an example for the cygwin compiles you
use as a sort of benchmark? I use a CentOS 5.2 x86_64 setup running Xen
3.3.0 and don't see any big differences between 32- and 64-bit HVM
performance. Disk and network performance is faster on PV, but not much
else.

We are using some quad quad-core Opterons (16 cores total) here and it
works very well. Especially for quad-CPU and above I would recommend
AMD, as long as you don't need PCI passthrough. For AMD dual and quad
setups it is important that NUMA is enabled. Pinning cores to VCPUs
seems to help performance in most cases as well, as does disabling
memory ballooning and working with fixed-RAM VMs (although some people
may not have enough RAM to do this).

Rob

-----Original Message-----
From: xen-users-bounces@lists.xensource.com
[mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Steve Thompson
Sent: 01 January 2009 15:09
To: morten@nidelven-it.no
Cc: xen-users@lists.xensource.com
Subject: Re: [Xen-users] Large server, Xen limitations

<snip>
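The pinning Rob mentions can be done at runtime with xm or in the guest
config file; the domain name and core numbers below are made-up examples.

    # pin VCPU 0 of guest "web1" to physical core 2, at runtime
    xm vcpu-pin web1 0 2

    # or restrict the domain permanently in its config file:
    vcpus = 1
    cpus  = "2"        # this domain's VCPUs may only run on core 2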
On Thursday 01 January 2009 08:15:09 am morten@nidelven-it.no wrote:

> We're talking about a 4 x quad-core CPU server with 64 GB of RAM and a
> couple of terabytes of RAIDed SATA storage.

I'd worry about your I/O with that setup if you plan to do anything
disk-intensive. If you're already spending on the 64 GB RAM, buy SAS.

John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
On Fri, 2 Jan 2009, Robert Dunkley wrote:

> Do you have a link or details of an example for the cygwin compiles you
> use as a sort of benchmark?

Unfortunately not. I did my initial testing when I installed the Dom0
last July. I didn't keep records as it was a single-developer setup (me),
although of course I recorded all my measurements at the time.

I am compiling a package (of my own design) that is comprised of about
500K lines of C and C++ code (mostly C), with about 1200 separate source
and header files and about 100 makefiles. It is a fully native
application on Windows, but I use cygwin with gcc/g++ -mno-cygwin and
GNU make, since the same makefiles can then be used on Windows as on the
other platforms that are supported (RHEL3/4/5, CentOS 3/4/5, Fedora,
Tru64, OSX). On Linux guests, the sources come from an NFS-mounted
volume, and the objects go to a (different) NFS-mounted volume. On
Windows, the same sources come from a Samba share, and the objects go to
a (different) Samba share.

Although I have 33 guests at present, I do the build on only 14 of them;
the others are for something else. The NFS server and Samba server are
on the Dom0 (CentOS 5.2 x86_64, Dell PE2900 with 8 cores and 24 GB).

I found that 64-bit HVM guests were about 30% faster than 32-bit guests
(where "faster" in this case relates to the elapsed time of a full
build). In their current NFS-root configuration, 32-bit guests are a
little faster than 64-bit, and 64-bit guests are about 50% faster than
the same guests in HVM form. Performance of HVM Windows guests was
dreadful until I installed PV drivers. I still have four fully-HVM
guests (RHEL3, RHEL4, 32 and 64 bit).

I'll do some more timings and will report back.

Steve
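For the curious, the shared-makefile arrangement Steve describes might
look roughly like the fragment below. The target and file names are
invented; -mno-cygwin is the flag of the cygwin gcc 3.x toolchain of that
era for building native (non-cygwin) Windows binaries.

    # Makefile fragment shared between Linux and Windows/cygwin builds
    ifeq ($(OS),Windows_NT)          # cygwin make inherits OS=Windows_NT
      CFLAGS = -O2 -mno-cygwin       # produce a native Win32 binary
    else
      CFLAGS = -O2
    endif

    myprog: main.o util.o
            $(CC) $(CFLAGS) -o $@ $^   # recipe line indented with a tab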
On Fri, 2 Jan 2009, John Madden wrote:

> On Thursday 01 January 2009 08:15:09 am morten@nidelven-it.no wrote:
>> We're talking about a 4 x quad-core CPU server with 64 GB of RAM and a
>> couple of terabytes of RAIDed SATA storage.
>
> I'd worry about your I/O with that setup if you plan to do anything
> disk-intensive. If you're already spending on the 64 GB RAM, buy SAS.

I second this recommendation. I went with SATA myself for cost reasons
(I got 750 GB enterprise SATA drives for $120 each). I guess that the
best I can say is that SATA gives better price/performance.

Steve
My own experience has been that a decent RAID controller is more
important. If you have multiple disk reads and writes happening from
multiple VMs, then a good controller like an Areca or 3ware will queue
and reorder the requests to stop disk thrashing; it can make a big
difference. Even RAID-1 read performance was better on the Areca SAS
card we used compared to the onboard LSI SAS solution, because Areca
seems to be better at RAID-1 read interleaving. A good SAS card also
allows you to mix and match SAS and SATA.

As a general rule of thumb, an enterprise 15K drive will offer 2-3 times
the performance of a SATA 7.2K drive. The reason is partly the spindle
speed, but also partly that server disk firmware is geared for random
read/write patterns rather than the continuous/read-ahead loads of home
use.

We actually mix and match as needed, generally I found SAS is worth it
for database and web servers but a waste on web servers or backup NAS.
SATA is also available in much larger sizes.

Rob
Some quick timings, fwiw. For those that missed the season premiere, I
am compiling an application consisting of about 500K lines of C and C++
code in 1200 source files with about 100 makefiles, using GNU make. Dom0
is a Dell PE2900 with two quad-core 2.33 GHz processors and 24 GB
memory; all file systems are RAID-1 SATA 7.2K rpm; xen is 3.0.3. Sources
are read from an NFS mount in each case, where Dom0 is the NFS server
(except for the Dom0 test itself, which is local); objects are written
to a different NFS mount. Dependencies were evaluated in an earlier
step.

Timings (min:sec elapsed) are the average of two runs (from "time
make"); everything was idle (except for the guest) during each run. CPU
utilization in the guest, as reported by "time", was about 85% in each
case. Each guest has 1 VCPU and 512 MB memory. For HVM, each qemu-dm on
Dom0 consumed between 10% and 25% of one core while compiling. The
ccache was cleared before each run, where applicable. There were no
Linux PV guests without NFS root.

  CentOS 5.2 (dom0), x86_64, 2.6.18-92.1.18.el5xen, gcc 4.1.2     4:58
  Fedora 6, i686,    2.6.20,   gcc 4.1.2, PV, NFS root            5:52
  Fedora 6, x86_64,  2.6.20,   gcc 4.1.2, PV, NFS root            6:39
  Fedora 9, i686,    2.6.25.3, gcc 4.3.0, PV, NFS root            6:21
  Fedora 9, x86_64,  2.6.25.3, gcc 4.3.0, PV, NFS root            7:09
  RHEL4, i686,    2.6.9-78.0.8.EL, gcc 3.4.6, HVM (file-based)   11:03
  RHEL4, x86_64,  2.6.9-78.0.8.EL, gcc 3.4.6, HVM (file-based)   12:25

A re-run on Fedora 9 using the ccache'd objects reduced the time to 3:17
(64-bit) and 3:05 (32-bit). Compilation on Dom0 using -j4 gives a time
of 1:28. I have not tried a guest with more than one VCPU (will do so in
another episode). Each 32-bit build is 12% faster than the corresponding
64-bit build, PV or HVM. I cannot compare these timings with the Windows
timings, since the software being compiled is not the same.

Timings for running another small-memory CPU-bound application (no I/O),
relative to the Dom0 performance:

  Dom0                1.00
  Fedora 6, x86_64    0.99
  Fedora 6, i686      0.88
  Fedora 9, x86_64    1.00
  Fedora 9, i686      0.85
  RHEL4, x86_64       0.95
  RHEL4, i686         0.80

Steve
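A rough sketch of the measurement procedure Steve describes, for anyone
wanting to reproduce it; the ccache flag is standard, everything else is
generic.

    # run inside each guest; sources and objects live on NFS mounts
    ccache -C        # clear the compiler cache so runs are comparable
    make clean
    time make        # elapsed time and CPU% come from the shell's "time"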
<morten@nidelven-it.no>
2009-Jan-03 22:19 UTC
Re: [Xen-users] Large server, Xen limitations
On Fri, 2 Jan 2009 10:00:40 -0500 (EST), Steve Thompson
<smt@vgersoft.com> wrote:

> On Fri, 2 Jan 2009, John Madden wrote:
>
>> On Thursday 01 January 2009 08:15:09 am morten@nidelven-it.no wrote:
>>> We're talking about a 4 x quad-core CPU server with 64 GB of RAM and
>>> a couple of terabytes of RAIDed SATA storage.
>>
>> I'd worry about your I/O with that setup if you plan to do anything
>> disk-intensive. If you're already spending on the 64 GB RAM, buy SAS.
>
> I second this recommendation. I went with SATA myself for cost reasons
> (I got 750 GB enterprise SATA drives for $120 each). I guess that the
> best I can say is that SATA gives better price/performance.

Right, thanks for the tip. :-) We need a lot of storage space, and
believe the applications will benefit more from available memory (the
database is cached for reads in memory, and unpacking the objects from
the database is CPU-intensive rather than disk-intensive).

I see there are some 1 TB "near-line SAS" disks available; does anyone
have experience with those disks vs. regular SAS or SATA disks?

-Morten

(Re-sending this as well, as the xen-users spam filter seems a bit
overzealous)
On Jan 2, 2009, at 4:07 PM, Robert Dunkley wrote:

<snip>

> We actually mix and match as needed, generally I found SAS is worth it
> for database and web servers but a waste on web servers or backup NAS.
> SATA is also available in much larger sizes.

hi Robert,

i really appreciate your comments and observations regarding SAS vs SATA
and the importance of the controller, but can you please clarify this
last bit about which resources SAS disks are wasted on?

regards,
mark+
mark garey wrote:

> On Jan 2, 2009, at 4:07 PM, Robert Dunkley wrote:
> <snip>
>> We actually mix and match as needed, generally I found SAS is worth it
>> for database and web servers but a waste on web servers or backup NAS.
>> SATA is also available in much larger sizes.
>
> i really appreciate your comments and observations regarding SAS vs
> SATA and the importance of the controller, but can you please clarify
> this last bit about which resources SAS disks are wasted on?

SAS is "SCSI over a serial interface", and does all the stuff SCSI has
been chosen for over the years - including features like command
queuing/re-ordering and detach/re-attach. So the controller can throw a
load of read and write commands at the drive, and the drive can re-order
them for efficiency and report success/failure for each request as it's
actually completed. Not applicable to SAS, but parallel SCSI allows the
controller to (for example) ask a drive to fetch some blocks of data;
the drive then 'detaches' from the bus, and the controller can issue
other commands to other drives on the bus while the first drive is
seeking to fetch the data requested.

SATA is "ATA over a serial interface", and while ATA has been getting
more intelligent over the iterations, it doesn't have some of the
performance functionality of SCSI. Also, ATA/SATA drives have
traditionally been 'lower spec' drives than SCSI/SAS drives.

So if you need performance, then SAS is the drive of choice. If
performance isn't as important, but you want capacity, then choose SATA.

A busy database engine puts heavy demands on the storage system with a
lot of random reads and writes. For these applications, SAS drives are
preferred as they normally have higher performance. On the other hand,
web servers tend to be read-only on the file system, and also tend to
read files in one go (and in clusters of files). For these, the benefits
of the high random I/O performance of SAS drives are not required, and
cheaper SATA drives may well suffice. Similarly, random I/O performance
of a NAS backup server isn't an issue - but space is. So SATA would be
the logical choice.

Most good SAS controllers support SATA drives; this is part of the spec.
So a box with a good SAS controller can have SATA hard drives installed
(or SATA CD/DVD if required, which aren't available as SAS). The reverse
is NOT true: SAS drives are NOT supported on SATA controllers. The SAS
and SATA connectors are similar, but keyed so that only the above valid
combinations are possible.

Just another of those details to consider when speccing up a server!

--
Simon Hobson

Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.
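A small practical footnote to the command-queuing point: on a reasonably
modern Linux kernel you can get a rough idea of whether queuing is active
on a drive from sysfs (the device name here is an example; a depth of 1
means no queuing).

    cat /sys/block/sda/device/queue_depth
    # ~31 on a SATA drive with NCQ enabled, 1 if queuing is off;
    # on a SAS/SCSI drive this shows the TCQ depth instead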
Steve Thompson <smt@vgersoft.com> writes:

> - With a large number of guests, you have to be more careful with
>   their start/stop/start order to avoid the 'out of memory' error.

Can you elaborate on this?

I have an 8 GB server running a dozen or so VMs with combined memory
usage (as reported by xm list) never over 6 GB, and have been getting
strange intermittent out-of-memory errors.
Aleksandar Ivanisevic wrote:

> Steve Thompson <smt@vgersoft.com> writes:
>
>> - With a large number of guests, you have to be more careful with
>>   their start/stop/start order to avoid the 'out of memory' error.
>
> Can you elaborate on this?
>
> I have an 8 GB server running a dozen or so VMs with combined memory
> usage (as reported by xm list) never over 6 GB, and have been getting
> strange intermittent out-of-memory errors.

Are you doing any operations on XenD? Or are you keeping your fingers
off the box?

Stefan
Aleksandar Ivanisevic
2009-Jan-12 16:26 UTC
Re: [Xen-users] Re: Large server, Xen limitations
Stefan de Konink wrote:

> Aleksandar Ivanisevic wrote:
>> Steve Thompson <smt@vgersoft.com> writes:
>>
>>> - With a large number of guests, you have to be more careful with
>>>   their start/stop/start order to avoid the 'out of memory' error.
>>
>> Can you elaborate on this?
>> I have an 8 GB server running a dozen or so VMs with combined memory
>> usage (as reported by xm list) never over 6 GB, and have been getting
>> strange intermittent out-of-memory errors.
>
> Are you doing any operations on XenD? Or are you keeping your fingers
> off the box?

This is a dedicated Xen server. Practically nothing except xm gets
executed there.
Aleksandar Ivanisevic wrote:

> Stefan de Konink wrote:
>
> <snip>
>
>> Are you doing any operations on XenD? Or are you keeping your fingers
>> off the box?
>
> This is a dedicated Xen server. Practically nothing except xm gets
> executed there.

That is exactly the issue I'm talking about. How often do you
save/migrate?

Stefan
This assumes you use memory ballooning and moving memory allocations;
the problem does not occur with fixed RAM allocations, although you need
quite a lot of RAM to run fixed.

Rob

-----Original Message-----
From: xen-users-bounces@lists.xensource.com
[mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Stefan de
Konink
Sent: 12 January 2009 16:42
To: aleksandar@ivanisevic.de
Cc: xen-users@lists.xensource.com
Subject: Re: [Xen-users] Re: Large server, Xen limitations

<snip>
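For reference, the fixed-allocation setup Rob describes is just a guest
config where the initial and maximum allocations agree, so the balloon
driver never has room to move; the values here are examples.

    # in the guest's config file: fixed allocation, no ballooning headroom
    memory = 512
    maxmem = 512     # equal to memory, so this domain can never balloon up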
Stefan de Konink <stefan@konink.de> writes:

<snip>

> That is exactly the issue I'm talking about. How often do you
> save/migrate?

Not very often, but as far as I can remember, the problem does occur
more often after a few migrations.

Is this something documented, reported? Can it be avoided somehow?
Aleksandar Ivanisevic <aleksandar@ivanisevic.de> writes:

> I have an 8 GB server running a dozen or so VMs with combined memory
> usage (as reported by xm list) never over 6 GB, and have been getting
> strange intermittent out-of-memory errors.

Make sure dom0-min-mem in /etc/xen/xend-config.sxp is set to something
sane; I use 512 on my 8 GB servers and 1024 on my 32 GB servers.
Anything less, depending on your Linux distro and how much you are doing
in the Dom0, can give you out-of-memory errors.
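The setting Luke is talking about lives in xend's config file; a minimal
sketch using his 8 GB-server value:

    # /etc/xen/xend-config.sxp
    (dom0-min-mem 512)     # xend will never balloon dom0 below 512 MB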
Luke S Crawford <lsc@prgmr.com> writes:

> Aleksandar Ivanisevic <aleksandar@ivanisevic.de> writes:
>> I have an 8 GB server running a dozen or so VMs with combined memory
>> usage (as reported by xm list) never over 6 GB, and have been getting
>> strange intermittent out-of-memory errors.
>
> Make sure dom0-min-mem in /etc/xen/xend-config.sxp is set to something
> sane; I use 512 on my 8 GB servers and 1024 on my 32 GB servers.
> Anything less, depending on your Linux distro and how much you are
> doing in the Dom0, can give you out-of-memory errors.

I already have dom0_mem=768M in the kernel params to limit the dom0
memory usage; why would I want to set the minimum to 1G?

I am not doing anything in dom0; this is a dedicated Xen server, and
with 10 PV machines dom0 memory usage never crosses 0.5 GB.

Is there a leak or something?
> Luke S Crawford <lsc@prgmr.com> writes:
>
> <snip>
>
> I already have dom0_mem=768M in the kernel params to limit the dom0
> memory usage; why would I want to set the minimum to 1G?
>
> I am not doing anything in dom0; this is a dedicated Xen server, and
> with 10 PV machines dom0 memory usage never crosses 0.5 GB.
>
> Is there a leak or something?

I've always found Xen is the most stable when you set dom0_mem and
dom0-min-mem to the same value. Basically it disables the balloon driver
for Dom0. Like you said, Dom0 shouldn't be doing a lot that requires
fluctuating memory, so setting it to a fixed value should not hurt. Of
course, that is just my opinion.

Ryan
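Concretely, Ryan's suggestion amounts to something like the following;
the 768 figure just mirrors Aleksandar's boot parameter, and the grub
path will vary by distro.

    # grub entry: fix dom0's allocation on the hypervisor command line
    kernel /xen.gz dom0_mem=768M

    # /etc/xen/xend-config.sxp: forbid ballooning dom0 below that figure
    (dom0-min-mem 768)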