I am trying to move a workload from bare metal onto a Xen VM. Prior to
doing that I decided to do some performance benchmarks using Hammerora
(http://hammerora.sourceforge.net/). The following are the details of the
configuration/results of an experiment to determine the overhead of Xen.
The tests were run on the same hardware. I am seeing a high overhead with
Xen. I realize there will be an I/O penalty, so I moved all the redo logs
into memory. I do understand there will be a VM scheduling penalty with
Xen, but should it be in the 30-40% range? Does anyone have any insight
into this?

Config 1:
- H/W: 16 GB memory, 4 CPU (Intel(R) Xeon(R) CPU E5606 @ 2.13GHz)
- Host OS: Ubuntu 11.04 (Linux 3.0.0) with Xen 4.1.2
- Guest OS: CentOS 5.6 (Linux 2.6.18-238.el5) with 12 GB memory and 4 vcpus
- Guest Software: HammerOra 2.9 (Load Testing Tool) against Oracle 11.2.0.1.0
- Setup:
  a) Dom0 running 4 vcpus
  b) 1 Guest running Oracle 11.2.0.1.0 with 12 GB memory and 4 vcpus
  c) 1 Guest running HammerOra 2.9 with 1 GB memory and 2 vcpus
  Note 1: None of the vcpus are pinned.
  Note 2: The Oracle cache size is sufficiently large and the redo logs
  are in memory (tmpfs), so there is negligible I/O (confirmed via AWR
  reports).
- Result:
  1 vuser:  Oracle Txns/minute: 26769
  2 vusers: Oracle Txns/minute: 47878

Config 2:
- H/W: 12 GB memory (memory was reduced using the kernel command line),
  4 CPU (Intel(R) Xeon(R) CPU E5606 @ 2.13GHz)
- OS: CentOS 5.6 (Linux 2.6.18-238.el5)
- Software: HammerOra 2.9 (Load Testing Tool) against Oracle 11.2.0.1.0
- Note: The Oracle cache size is sufficiently large and the redo logs are
  in memory (tmpfs), so there is negligible I/O (confirmed via AWR reports).
- Result:
  1 vuser:  Oracle Txns/minute: 39840
  2 vusers: Oracle Txns/minute: 77423

Result:
- 1 vuser:  33% overhead due to Xen
- 2 vusers: 39% overhead due to Xen

Thanks,
AP
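P.S. For reference, the redo logs were moved into memory along these lines
(a rough sketch; the mount point, size, and file names are illustrative,
not the exact ones used):

  # create a tmpfs mount large enough for the redo log groups
  mkdir -p /oradata/redo_tmpfs
  mount -t tmpfs -o size=4g tmpfs /oradata/redo_tmpfs
  chown oracle:oinstall /oradata/redo_tmpfs

  # then in SQL*Plus, add new log groups on the tmpfs mount and drop the
  # old ones once they become inactive, e.g.:
  #   ALTER DATABASE ADD LOGFILE GROUP 4
  #     ('/oradata/redo_tmpfs/redo04.log') SIZE 512M;
  #   ALTER DATABASE DROP LOGFILE GROUP 1;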
On Fri, Mar 16, 2012 at 8:00 AM, AP <apxeng@gmail.com> wrote:
> Config 1:
> b) 1 Guest running Oracle 11.2.0.1.0 with 12 GB memory and 4 vcpus
> c) 1 Guest running HammerOra 2.9 with 1 GB memory and 2 vcpus
>
> Config 2:
> - H/W: 12 GB memory (memory was reduced using the kernel command
> line), 4 CPU (Intel(R) Xeon(R) CPU E5606 @ 2.13GHz)
> - OS: CentOS 5.6 (Linux 2.6.18-238.el5)
> - Software: HammerOra 2.9 (Load Testing Tool) against Oracle 11.2.0.1.0

So in config 1 you put the benchmark in a different domU?

To narrow down possible problems, I suggest you put it in the same domU
first, so the config will be identical to bare metal. Also make sure that
both bare metal and the domU are running from the same block device
(kinda tricky, but possible if you use UUID or LABEL).

--
Fajar
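P.S. For example, something along these lines (device names, UUIDs and
mount points below are placeholders):

  # find the filesystem UUID / label of the disk holding the Oracle data
  blkid /dev/sdb1
  # -> /dev/sdb1: UUID="0a1b2c3d-..." LABEL="oradata" TYPE="ext3"

  # then mount it by UUID (or label) in /etc/fstab, both on bare metal
  # and inside the domU, so the same entry works however the kernel
  # names the device:
  #   UUID=0a1b2c3d-...  /u01  ext3  defaults  0 2
  #   LABEL=oradata      /u01  ext3  defaults  0 2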
On Thu, Mar 15, 2012 at 8:21 PM, Fajar A. Nugraha <list@fajar.net> wrote:
> On Fri, Mar 16, 2012 at 8:00 AM, AP <apxeng@gmail.com> wrote:
>> Config 1:
>> b) 1 Guest running Oracle 11.2.0.1.0 with 12 GB memory and 4 vcpus
>> c) 1 Guest running HammerOra 2.9 with 1 GB memory and 2 vcpus
>>
>> Config 2:
>> - H/W: 12 GB memory (memory was reduced using the kernel command
>> line), 4 CPU (Intel(R) Xeon(R) CPU E5606 @ 2.13GHz)
>> - OS: CentOS 5.6 (Linux 2.6.18-238.el5)
>> - Software: HammerOra 2.9 (Load Testing Tool) against Oracle 11.2.0.1.0
>
> So in config 1 you put the benchmark in a different domU?
>
> To narrow down possible problems, I suggest you put it in the same domU
> first, so the config will be identical to bare metal.

I tried that and it made no difference.

> Also make sure that both bare metal and the domU are running from the
> same block device (kinda tricky, but possible if you use UUID or LABEL).

Yeah, that is a little tricky for me to pull off, though I am using
identical disks for both tests. Plus I don't know how much that will
help, as most of the stuff was in memory (tmpfs) and hence I/O shouldn't
be that much of an issue. What I am concerned about is the heavy 30-40%
hit with Xen on a workload that is predominantly CPU and memory bound.

Thanks,
AP
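P.S. One way to sanity-check that the hit really is CPU-side would be to
watch per-domain CPU usage from dom0 while the benchmark runs, e.g.:

  # live per-domain CPU%, memory and I/O counters, refreshed every 2 seconds
  xentop -d 2

  # show which physical CPUs each vcpu is currently running on
  xm vcpu-list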
On Fri, Mar 16, 2012 at 11:03 AM, AP <apxeng@gmail.com> wrote:
> Yeah, that is a little tricky for me to pull off, though I am using
> identical disks for both tests. Plus I don't know how much that will
> help, as most of the stuff was in memory (tmpfs) and hence I/O shouldn't
> be that much of an issue. What I am concerned about is the heavy 30-40%
> hit with Xen on a workload that is predominantly CPU and memory bound.

I wonder if this is because of the vanilla pv_ops kernel. It'd be great
if you could repeat the setup with a CentOS 5 dom0, just to be sure. If
it DOES perform better, then something in the new upstream kernel is
causing the penalty, in which case xen-devel would probably be the more
appropriate place to ask.

--
Fajar
Hi,

Questions:

Guest OS: CentOS 5.6 (Linux 2.6.18-238.el5) with 12 GB memory and 4 vcpus

Is it 2.6.18-238.el5 or 2.6.18-238.el5xen?

Can you try CPU pinning for all vcpus onto real CPUs?

Florian

2012/3/16 AP <apxeng@gmail.com>:
> I am trying to move a workload from bare metal onto a Xen VM. Prior
> to doing that I decided to do some performance benchmarks using
> Hammerora (http://hammerora.sourceforge.net/).
> The following are the details of the configuration/results of an
> experiment to determine the overhead of Xen. The tests were run on the
> same hardware. I am seeing a high overhead with Xen. I realize there
> will be an I/O penalty, so I moved all the redo logs into memory. I do
> understand there will be a VM scheduling penalty with Xen, but should
> it be in the 30-40% range? Does anyone have any insight into this?

Florian

--
the purpose of libvirt is to provide an abstraction layer hiding all
xen features added since 2006 until they were finally understood and
copied by the kvm devs.
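P.S. For example, with the xm toolstack, something along these lines
(domain names and CPU assignments are only illustrative):

  # pin dom0's vcpus and the Oracle guest's vcpus to disjoint physical CPUs
  xm vcpu-pin Domain-0 0 0
  xm vcpu-pin Domain-0 1 1
  xm vcpu-pin oracle-guest 0 2
  xm vcpu-pin oracle-guest 1 3

  # or persistently in the guest config file:
  #   cpus = "2-3"

  # verify the result
  xm vcpu-list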
On Sat, Mar 17, 2012 at 4:51 PM, Florian Heigl <florian.heigl@gmail.com> wrote:
> Hi,
>
> Questions:
>
> Guest OS: CentOS 5.6 (Linux 2.6.18-238.el5) with 12 GB memory and 4 vcpus
>
> Is it 2.6.18-238.el5 or 2.6.18-238.el5xen?

It was 2.6.18-238.el5. Should I be running with el5xen?

> Can you try CPU pinning for all vcpus onto real CPUs?

Pinning CPUs did not make much of a difference.

> Florian
>
> 2012/3/16 AP <apxeng@gmail.com>:
>> I am trying to move a workload from bare metal onto a Xen VM. Prior
>> to doing that I decided to do some performance benchmarks using
>> Hammerora (http://hammerora.sourceforge.net/).
>> The following are the details of the configuration/results of an
>> experiment to determine the overhead of Xen. The tests were run on the
>> same hardware. I am seeing a high overhead with Xen. I realize there
>> will be an I/O penalty, so I moved all the redo logs into memory. I do
>> understand there will be a VM scheduling penalty with Xen, but should
>> it be in the 30-40% range? Does anyone have any insight into this?
>
> Florian
>
> --
> the purpose of libvirt is to provide an abstraction layer hiding all
> xen features added since 2006 until they were finally understood and
> copied by the kvm devs.
On Tue, Mar 20, 2012 at 1:38 AM, AP <apxeng@gmail.com> wrote:
> On Sat, Mar 17, 2012 at 4:51 PM, Florian Heigl <florian.heigl@gmail.com> wrote:
>> Hi,
>>
>> Questions:
>>
>> Guest OS: CentOS 5.6 (Linux 2.6.18-238.el5) with 12 GB memory and 4 vcpus
>>
>> Is it 2.6.18-238.el5 or 2.6.18-238.el5xen?
>
> It was 2.6.18-238.el5. Should I be running with el5xen?

Do you know the difference between PV and HVM?

If you use 2.6.18-238.el5, most likely you're using HVM domU. Don't be
surprised if it's much slower.

--
Fajar
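P.S. A quick way to check which mode a domU is actually running in, from
dom0 (the domain name is illustrative):

  # dump the domain's configuration; an HVM guest should show an (hvm ...)
  # image section (hvmloader), while a PV guest shows a (linux ...) image
  # section with a kernel/bootloader entry
  xm list --long oracle-guest | grep -A3 image

  # from inside the guest, the running kernel is also a hint:
  uname -r    # e.g. 2.6.18-238.el5xen for the CentOS 5 Xen-aware kernel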
On Mon, Mar 19, 2012 at 9:49 PM, Fajar A. Nugraha <list@fajar.net> wrote:
> On Tue, Mar 20, 2012 at 1:38 AM, AP <apxeng@gmail.com> wrote:
>> On Sat, Mar 17, 2012 at 4:51 PM, Florian Heigl <florian.heigl@gmail.com> wrote:
>>> Hi,
>>>
>>> Questions:
>>>
>>> Guest OS: CentOS 5.6 (Linux 2.6.18-238.el5) with 12 GB memory and 4 vcpus
>>>
>>> Is it 2.6.18-238.el5 or 2.6.18-238.el5xen?
>>
>> It was 2.6.18-238.el5. Should I be running with el5xen?
>
> Do you know the difference between PV and HVM?

Ah yes. I will fix that. Thanks for pointing that out.

> If you use 2.6.18-238.el5, most likely you're using HVM domU. Don't be
> surprised if it's much slower.
>
> --
> Fajar
On Mon, Mar 19, 2012 at 9:51 PM, AP <apxeng@gmail.com> wrote:
> On Mon, Mar 19, 2012 at 9:49 PM, Fajar A. Nugraha <list@fajar.net> wrote:
>> On Tue, Mar 20, 2012 at 1:38 AM, AP <apxeng@gmail.com> wrote:
>>> On Sat, Mar 17, 2012 at 4:51 PM, Florian Heigl <florian.heigl@gmail.com> wrote:
>>>> Hi,
>>>>
>>>> Questions:
>>>>
>>>> Guest OS: CentOS 5.6 (Linux 2.6.18-238.el5) with 12 GB memory and 4 vcpus
>>>>
>>>> Is it 2.6.18-238.el5 or 2.6.18-238.el5xen?
>>>
>>> It was 2.6.18-238.el5. Should I be running with el5xen?
>>
>> Do you know the difference between PV and HVM?
>
> Ah yes. I will fix that. Thanks for pointing that out.

Thinking about it... PV-on-HVM drivers only help when I/O is involved.
They are not going to help with the VMX CPU overhead, and most of the
benchmark runs in memory anyway. But I will give it a shot anyhow...

>> If you use 2.6.18-238.el5, most likely you're using HVM domU. Don't be
>> surprised if it's much slower.
>>
>> --
>> Fajar
On Tue, Mar 20, 2012 at 11:54 AM, AP <apxeng@gmail.com> wrote:
>>>> It was 2.6.18-238.el5. Should I be running with el5xen?
>>>
>>> Do you know the difference between PV and HVM?
>>
>> Ah yes. I will fix that. Thanks for pointing that out.
>
> Thinking about it... PV-on-HVM drivers only help when I/O is involved.
> They are not going to help with the VMX CPU overhead, and most of the
> benchmark runs in memory anyway. But I will give it a shot anyhow...

Correct, which is why I didn't suggest installing PV drivers. You need to
use a PV domU instead, which doesn't have the CPU overhead of relying on
hardware-assisted virtualization.

--
Fajar
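P.S. A minimal sketch of what that could look like (names, paths and
sizes are placeholders):

  # inside the CentOS 5 guest: install the Xen-aware (el5xen) kernel and
  # make sure grub is set to boot it
  yum install kernel-xen

  # then define the guest as a PV domU in dom0, roughly:
  #   name       = "oracle-guest"
  #   memory     = 12288
  #   vcpus      = 4
  #   bootloader = "/usr/bin/pygrub"
  #   disk       = [ "phy:/dev/vg0/oracle,xvda,w" ]
  #   vif        = [ "bridge=xenbr0" ]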