Hi,

I am a student at the University College of Oslo and in one experiment have tried to test the performance of Xenolinux compared to native Linux on Debian lenny with the 2.6.26-2 kernel.

bonnie++, the disk I/O benchmark program, was used among other tests such as CPU-intensive scripts. The goal is to prove the statement (once again) that Xen adds an overhead of at most 8%.

The results of bonnie++ have been surprising and I want to explain them. They show that for some types of disk I/O, like sequential delete and random create, Xen performs faster than native Linux. A full comparison chart is attached as a PDF. The tests were run first from native Linux, then from both Dom0 and a DomU, with the same results.

Can you please help me understand these results? How can Xenolinux perform faster than native Linux, even if only for some types of operations?

Please forgive me if I'm missing something completely obvious. If you need any further info, please just send me an email and it will be provided.

Hope to hear from you soon,

Amir Ahmed
Do you have the storage for the Xen VM in a file or in a volume? File-based storage can cache to RAM in Dom0, causing very high disk results.

-----Original Message-----
From: Amir Maqbool Ahmed
Sent: 24 May 2009 11:12
To: xen-users@lists.xensource.com
Subject: [Xen-users] Xen Performance
Hi,

Are you comparing a Xen guest mounted on LVM with a "normal" filesystem not mounted on LVM? Since LVM disables write barriers, that alone can produce a performance improvement.

Olivier

Amir Maqbool Ahmed wrote:
> They show that for some types of disk I/O, like sequential delete and random create,
> Xen performs faster than native Linux.
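[Editorially added sketch: one way to check that theory is to make the barrier setting explicit on the natively benchmarked filesystem and re-run bonnie++ both ways; the device and mount point below are only examples, and whether the gap tracks the barrier setting is what the test would show.]

    # force barriers on, then off, and compare the bonnie++ runs
    mount -o remount,barrier=1 /mnt/test
    mount -o remount,barrier=0 /mnt/test

    # the kernel usually logs it when barriers cannot be honoured (e.g. on device-mapper/LVM)
    dmesg | grep -i barrier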
> They show that for some types of disk I/O, like sequential delete and
> random create, Xen performs faster than native Linux. A full comparison
> chart is attached as a PDF.
> The tests were run first from native Linux, then from both Dom0 and a DomU,
> with the same results.

Hi Amir,

Not an authoritative answer here... but I've seen similar results just running "hdparm" to check I/O rates. I've also seen some amount of clock skew between host and guest OS in the past, so it could be that (possible) clock skew is causing your benchmark to show higher-than-possible results. If your guest OS clock runs even a little slow, it will affect results, especially in short-term tests.

Joe
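[Editorially added note: hdparm's own two timing modes already illustrate how much caching can inflate a number; the device name is just an example.]

    hdparm -t /dev/sda    # timed buffered disk reads: mostly the real device
    hdparm -T /dev/sda    # timed cached reads: essentially RAM speed

If a benchmark's figures look closer to the -T numbers than the -t numbers, caching rather than the disk is being measured.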
On Sun, May 24, 2009 at 5:12 AM, Amir Maqbool Ahmed <AmirM.Ahmed@stud.iu.hio.no> wrote:
> They show that for some types of disk I/O, like sequential delete and random create,
> Xen performs faster than native Linux.

Let me guess: you're running your DomU's images as files, and the bonnie++ test size is bigger than the DomU's RAM but less than Dom0's RAM. Right?

It's an illusion created by the file-level caching in Dom0. Try a bigger test size and it should disappear. In fact, with real-life loads it's the slowest configuration, with better performance when you use lower-level backends (LVM, or tap:) that skip the cache.

--
Javier
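[Editorially added sketch: a run along these lines, with -s set to comfortably exceed the Dom0's RAM, should take the Dom0 page cache out of the picture. The directory, user, and sizes are hypothetical; here the Dom0 is assumed to have 4 GB of RAM.]

    # -s: total test file size in MB, -r: RAM size bonnie++ assumes,
    # -n: number of files (x1024) for the create/delete tests, -u: user to run as
    bonnie++ -d /mnt/benchmark -s 8192 -r 4096 -n 128 -u nobody

If the "faster than native" numbers vanish once -s is well above Dom0 RAM, the earlier results were measuring Dom0's cache rather than the disk.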
On Sun, May 24, 2009 at 12:12:23PM +0200, Amir Maqbool Ahmed wrote:
> The results of bonnie++ have been surprising and I want to explain them.
> They show that for some types of disk I/O, like sequential delete and random create,
> Xen performs faster than native Linux.

Hi,

Usually this is caused by using file-backed disks for the domU, so the dom0 kernel's file cache is giving this boost. Change to LVM volumes or tap:aio instead of file, because they don't have dom0 (file) caching.

-- Pasi
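[Editorially added example: concretely, the change is in the disk line of the domU's xm config file. The image paths and volume names below are only placeholders.]

    # file-backed image: reads and writes pass through the dom0 page cache
    disk = [ 'file:/var/lib/xen/images/domu1.img,xvda,w' ]

    # LVM logical volume via the phy: backend: no dom0 file caching
    disk = [ 'phy:/dev/vg0/domu1,xvda,w' ]

    # or the same image file through blktap, which also bypasses the dom0 cache
    disk = [ 'tap:aio:/var/lib/xen/images/domu1.img,xvda,w' ]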
The 8% figure is a dangerous over-simplification; it represents simply the results of one particular well-designed study. Clearly there are situations where the overhead of using Xen is much higher than this.

On May 24, 2009, at 6:12 AM, Amir Maqbool Ahmed wrote:
> Can you please help me understand these results? How can Xenolinux perform
> faster than native Linux, even if only for some types of operations?
On Thu, May 28, 2009 at 11:57 AM, Peter Booth <peter_booth@mac.com> wrote:
> The 8% figure is a dangerous over-simplification; it represents simply the
> results of one particular well-designed study. Clearly there are situations
> where the overhead of using Xen is much higher than this.

But not as measured on Dom0, which is what I think the original question was about.

--
Javier
The original question doesn't say "as measured on Dom0", and appears to reference the well-known study performed at Cambridge University and replicated at Clarkson University. That study used a handful of benchmarks and compared throughput across a number of virtualization platforms. It found that, for those tests, Xen throughput was at most only 8% worse than native Linux.

The issue for our community, however, is that it is human nature to use the "at most 8% worse" as a data point for Xen performance in general. But throughput is not the whole picture. Many Xen installations host user-facing web applications where response time is much more important than throughput, and Xen often increases the variability of response times.

One real-world example:
  native Linux: page response times of 400ms mean / 150ms standard deviation
  Xen VM:       page response times of 700ms mean / 3.5s standard deviation

In this scenario, mean response times are almost 100% worse, and the 90th percentile is 1000% worse.

Peter Booth

On May 28, 2009, at 1:08 PM, Javier Guerra wrote:
> But not as measured on Dom0, which is what I think the original question was about.
Peter Booth <peter_booth@mac.com> writes:
> One real-world example:
>   native Linux: page response times of 400ms mean / 150ms standard deviation
>   Xen VM:       page response times of 700ms mean / 3.5s standard deviation
>
> In this scenario, mean response times are almost 100% worse, and the 90th
> percentile is 1000% worse.

Yeah, but if your average native times are 400ms, you are very likely already hitting swap and experiencing what I would call unacceptable performance. It's no surprise that swap on a shared device is going to be slower than non-shared swap.

Hm, unless that Xen VM is on a Xen box doing nothing else, in which case the results would surprise me quite a lot (unless you were using HVM mode or had less RAM in the Xen VM than in the native box).
In this example, swapping/page scanning was not an issue.

On May 30, 2009, at 4:05 PM, Luke S Crawford wrote:
> Yeah, but if your average native times are 400ms, you are very likely
> already hitting swap and experiencing what I would call unacceptable
> performance.
Luke,

Apologies for my first incomplete reply. Here's more context.

The VMs weren't page scanning. They did show non-trivial %steal (where non-trivial is > 1%). These VMs are commercially hosted on five quad-core hosts with approximately 14 VMs per host and just under 1GB of RAM per VM. That's not a lot of memory, but the workload of one nginx and three mongrels per VM is comfortably under 512MB of RSS.

I have heard numerous mentions of similar behavior from users of other utility platforms. There is a recent (Feb 2009) report by IBM that also describes this behavior once the number of domUs exceeds six.

My point, however, is that Xen performance is not well understood in general, and there are situations where virtualization doesn't perform well.

Peter
Michael David Crawford <mdc@prgmr.com> writes:
> Why do HVMs perform so poorly under Xen? My experience so far is that
> they go at less than five percent of the speed they would on a
> physical machine.

Because without paravirtualized drivers, all I/O goes through what is essentially qemu. Slow. Five percent seems a bit slow even for that, but the I/O degradation is pretty significant.

> A typical example of what I am experiencing is that with a single 2.5
> GHz Xeon vCPU and 512 MB of RAM, BeOS 5 Pro takes about fifteen
> minutes to boot to the desktop.
> But when installed directly on a 233 MHz Pentium II, getting to the
> desktop takes less than a minute.

There is weirdness with some other operating systems; sometimes they depend on some hardware feature that is normally not in the critical path, so they end up slow or not working at all. For a while, FreeBSD had all kinds of trouble running under HVM mode. (It works fine these days; I/O is still slow, of course.)

Running xenoprof would probably be enlightening, but I don't know whether that is too much work if the OS works fine under other virtualization systems.
Peter Booth <peter_booth@mac.com> writes:
> Here's more context. The VMs weren't page scanning. They did show non-trivial
> %steal (where non-trivial is > 1%). These VMs are commercially hosted on five
> quad-core hosts with approximately 14 VMs per host and just under 1GB of RAM
> per VM. That's not a lot of memory, but the workload of one nginx and three
> mongrels per VM is comfortably under 512MB of RSS.

I guess I don't know much about mongrel, but if someone was complaining to me about performance of a modern web application in an image with only 1GB of RAM, CPU would not be the first thing I'd look at.

So steal was >1%? What was idle? What was iowait? If steal was only 10% and iowait was 50%, I'd still add more RAM before I added more CPU. (More RAM, if it's not required by the application, will be used as disk cache and in most cases helps to mitigate a slow or over-used disk.)

> I have heard numerous mentions of similar behavior from users of other
> utility platforms. There is a recent (Feb 2009) report by IBM that also
> describes this behavior once the number of domUs exceeds six.

Yeah, about a month ago I had a customer complaining about this, wanting more CPU. I talked him into getting more RAM (based on his iowait numbers) and his performance improved. Disk is orders of magnitude slower than just about anything else (besides maybe network), so whenever you can exchange disk access for RAM access you see dramatic performance improvements.

> My point, however, is that Xen performance is not well understood in
> general, and there are situations where virtualization doesn't perform well.

From what I have seen, the overhead of using phy:// disks is pretty small when you are the only VM trying to access the disk, but having a bunch of other guests hitting the same disk can really slow you down; it seems to turn all your sequential access into random access.

Also note, I've seen better worst-case performance by giving each VM fewer VCPUs, and the Xen guys are not kidding about dedicating a core to the Dom0. Setting cpus="1-7" in your xm config file (assuming an 8-core box) and giving dom0 only 1 vcpu makes a world of difference on heavily loaded boxes.
On Jun 1, 2009, at 8:43 PM, Luke S Crawford wrote:
> Also note, I've seen better worst-case performance by giving each VM fewer
> VCPUs, and the Xen guys are not kidding about dedicating a core to the Dom0.
> Setting cpus="1-7" in your xm config file (assuming an 8-core box) and giving
> dom0 only 1 vcpu makes a world of difference on heavily loaded boxes.

This may be a dumb question, but is that any different from, or the same as, setting "dom0-cpus 1" in the xend-config.sxp file? Are you specifying "cpus=1-7" in the guest's config or the Xen daemon's config?

Thanks in advance for clearing up my confusion,

...adam
Adam Wead <awead@indiana.edu> writes:
> This may be a dumb question, but is that any different from, or the same as,
> setting "dom0-cpus 1" in the xend-config.sxp file? Are you specifying
> "cpus=1-7" in the guest's config or the Xen daemon's config?

Setting cpus="1-7" in the guest config file is important, because otherwise the guests will run on all CPUs, including the dom0 CPUs.

I believe dom0's vcpu0 is pinned to cpu0, vcpu1 to cpu1, and so on, so setting dom0-cpus 1 will leave dom0 with just vcpu0 pinned to cpu0.

Just setting dom0-cpus in xend-config.sxp without setting cpus= in the domU configs doesn't help much, because the guests still trample over the one CPU the dom0 has.
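[Editorially added sketch putting the two pieces together; the guest name and file paths are just examples, and see Pasi's follow-up later in the thread for caveats about how dom0-cpus actually behaves.]

    # in the guest's config, e.g. /etc/xen/guest1.cfg:
    vcpus = 2
    cpus  = "1-7"        # keep this guest off physical CPU 0

    # in /etc/xen/xend-config.sxp:
    (dom0-cpus 1)        # dom0 gets a single vcpu

    # verify the resulting pinning from dom0:
    xm vcpu-list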
On Jun 2, 2009, at 1:59 AM, Luke S Crawford wrote:
> Just setting dom0-cpus in xend-config.sxp without setting cpus= in the domU
> configs doesn't help much, because the guests still trample over the one
> CPU the dom0 has.

Thanks!

So it looks like you have to set both if you want your Dom0 to be completely guest-free. The xend-config.sxp file specifies which CPU dom0 uses, and the guest config file keeps the guests away from it and on all the others.

...adam
Luke, all,

On Jun 1, 2009, at 8:43 PM, Luke S Crawford wrote:
> I guess I don't know much about mongrel, but if someone was complaining to me
> about performance of a modern web application in an image with only 1GB of RAM,
> CPU would not be the first thing I'd look at.

I look at everything. Yes, 1GB is a limitation. The mongrel was configured taking that into account.

> So steal was >1%? What was idle? What was iowait? If steal was only 10%
> and iowait was 50%, I'd still add more RAM before I added more CPU.

There's no need to discuss hypotheticals. Let's look at real numbers at a busy time:

sar -W -f
00:00:01    pswpin/s pswpout/s
00:00:06        0.00      0.00
00:00:11        0.00      0.00
00:00:16        0.00      0.00
00:00:21        0.00      0.00

pswpin/s and pswpout/s are zero at all times; in other words, no swapping is occurring, so disk isn't a factor here.

00:00:01    CPU    %user   %nice %system %iowait  %steal   %idle
00:00:06    all    84.42    0.00    6.92    3.08    0.96    4.62
00:00:11    all    92.46    0.00    6.15    0.00    1.19    0.20
00:00:16    all    90.24    0.00    6.37    0.40    2.00    1.00
00:00:21    all    88.42    0.00    8.98    0.00    1.80    0.80

We are clearly CPU-starved.

> Disk is orders of magnitude slower than just about anything else (besides
> maybe network), so whenever you can exchange disk access for RAM access
> you see dramatic performance improvements.

That is not the case here. You will only see an improvement if disk access is a bottleneck.

> My point, however, is that Xen performance is not well understood in
> general, and there are situations where virtualization doesn't perform well.

These sar readings on the DomU do not tell the whole picture, nor do the studies showing that Xen throughput is at worst only 8% worse than native Linux. There are scenarios where the impact of virtualization on user response time can be a factor of 3 or 4. This issue is poorly understood, has been seen and described in the research literature, and until we get a handle on it and understand it, it will cause substantial problems. With the increasing popularity of the cloud and of virtualized environments, where there is less transparency than in a physical environment, we should expect performance problems to increase.
My first observation would be that I don't trust any self-measured performance values from a VM. There are tricky time-usage allocation issues; I've seen and heard the 8% claims, but I didn't believe the folks knew how to measure the VM's behavior without trusting the VM.

________________________________
From: Peter Booth [mailto:peter_booth@mac.com]
Sent: Sat 6/6/2009 5:44 PM
To: Luke S Crawford
Cc: xen-users@lists.xensource.com; Amir Maqbool Ahmed; Javier Guerra
Subject: Re: [Xen-users] Xen Performance
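[Editorially added sketch: one way around trusting the guest's own clock is to sample the domains from dom0 with xentop in batch mode; the 5-second interval is arbitrary.]

    # -b: batch (scriptable) output, -d 5: sample every 5 seconds
    xentop -b -d 5

The per-domain CPU seconds and %CPU reported there come from the hypervisor's accounting rather than from inside the guest, so they are not skewed by the guest's view of time or stolen cycles.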
On Tue, Jun 02, 2009 at 08:42:56AM -0400, Adam Wead wrote:
> So it looks like you have to set both if you want your Dom0 to be completely
> guest-free. The xend-config.sxp file specifies which CPU dom0 uses, and the
> guest config file keeps the guests away from it and on all the others.

Actually the above is not totally correct.. see this recent thread:

http://lists.xensource.com/archives/html/xen-devel/2009-07/msg00873.html

and

http://lists.xensource.com/archives/html/xen-devel/2009-07/msg00875.html

-- Pasi