Hello,

I wonder if the performance of a Xen machine can be increased by disabling SMP
in the Linux kernel by default, basically having one of the 8 processors tied
to dom0.

In my scenario I use NFS or iSCSI as the file backend. With NFS there will be
a lot of tapdisk processes, while in the iSCSI scenario there is less overhead
from userspace processes.

Could anyone give me a hint on the performance increase or decrease of SMP vs
uniprocessor?

Stefan
Hi Stefan,

On Sat, Jun 14, 2008 at 7:27 PM, Stefan de Konink <skinkie@xs4all.nl> wrote:
> Hello,
>
> I wonder if the performance of a Xen machine can be increased by disabling
> SMP in the Linux kernel by default, basically having one of the 8 processors
> tied to dom0.
>
> In my scenario I use NFS or iSCSI as the file backend. With NFS there will
> be a lot of tapdisk processes, while in the iSCSI scenario there is less
> overhead from userspace processes.
>
> Could anyone give me a hint on the performance increase or decrease of SMP
> vs uniprocessor?

In the original "Xen and the Art of Virtualization" paper they actually
disabled SMP and had better IO performance. I don't know if this is still
true.

Take a look at:
http://research.microsoft.com/~tharris/papers/2003-sosp.pdf
www.clarkson.edu/class/cs644/xen/files/repeatedxen-usenix04.pdf

Cheers,
Todd
Todd Deshane wrote:
> In the original "Xen and the Art of Virtualization" paper they actually
> disabled SMP and had better IO performance. I don't know if this is still
> true.
>
> Take a look at:
> http://research.microsoft.com/~tharris/papers/2003-sosp.pdf
> www.clarkson.edu/class/cs644/xen/files/repeatedxen-usenix04.pdf

Now I guess in 2003 there was no concept like tapdisk yet. I'll see if I can
get a clean benchmark of 32 VMs doing the same task, SMP vs non-SMP.

Stefan
> > I wonder if the performance of a Xen machine can be increased by
> > disabling SMP in the Linux kernel by default, basically having one
> > of the 8 processors tied to dom0.

Devoting a (logical) processor to dom0 can improve IO performance for guests,
it's true. Note that even just dedicating a hyperthread (if you have them) can
improve things.

> > In my scenario I use NFS or iSCSI as the file backend. With NFS there
> > will be a lot of tapdisk processes, while in the iSCSI scenario there is
> > less overhead from userspace processes.
> >
> > Could anyone give me a hint on the performance increase or decrease of
> > SMP vs uniprocessor?

FYI, XenLinux will automatically optimise itself for UP or SMP operation
without you having to recompile. The spinlock operations are patched out if
the kernel is booted uniprocessor (UP), or patched in for SMP. What's more, I
think this is even done at runtime, so a kernel can SMP-ify or de-SMP-ify
itself on the fly (!). I think this might have gone into mainline Linux a
while back, actually.

I wouldn't be surprised if it's actually not possible to run a pure UP dom0 on
an SMP system, but I don't know for sure.

> > In the original "Xen and the Art of Virtualization" paper they actually
> > disabled SMP and had better IO performance. I don't know if this is still
> > true.
> >
> > Take a look at:
> > http://research.microsoft.com/~tharris/papers/2003-sosp.pdf
> > www.clarkson.edu/class/cs644/xen/files/repeatedxen-usenix04.pdf
>
> Now I guess in 2003 there was no concept like tapdisk yet. I'll see if I can
> get a clean benchmark of 32 VMs doing the same task, SMP vs non-SMP.

Back then dom0 didn't even handle IO for the domains, it was all done in
Xen ;-) Things have moved on quite a long way since then!

Worth noting that if the processes in dom0 are just doing IO then they'll be
blocked most of the time, so the performance may depend less on the number of
CPUs available to dom0 and more on the regularity of scheduling (i.e.
deploying a dom0 with dedicated PCPUs is probably the ultimate here).

Cheers,
Mark

-- 
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
> Question:
> How can I dedicate one CPU for Dom0 only, and also how can I remove that CPU
> so it is not available to the other VMs? Is there a configuration file
> somewhere? I need top network IO performance for my VMs, since each one is
> a VOIP softswitch.

You need to do this manually by setting the other domUs' config files (or
issuing xm commands at runtime) so that they do not run on the CPU that dom0
is running on.

Cheers,
Mark

-- 
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
Dear Mark,

The million dollar question: how? Where is the documentation that can help me
do that? I mean, tie up one core for Dom0 and simultaneously remove that core
from the other DomUs.

Mark Williamson wrote:
> You need to do this manually by setting the other domUs' config files (or
> issuing xm commands at runtime) so that they do not run on the CPU that dom0
> is running on.
Venefax wrote:
> Dear Mark,
> The million dollar question: how? Where is the documentation that can help
> me do that? I mean, tie up one core for Dom0 and simultaneously remove that
> core from the other DomUs.

There is no way to do it simultaneously that I know of. Here's what I did:

- Make sure you have disabled hyperthreading (if using Intel) in the BIOS.
  A hyperthread is not an actual independent core.
- In /etc/xen/xend-config.sxp: (dom0-cpus 1)
  This will limit domain 0 to using only one CPU.
- In /etc/rc.local: xm vcpu-pin 0 0 0
  This will make sure dom0's CPU is pinned to physical CPU 0 (feel free to
  change it to whatever you want). I think "xm vcpu-pin" is broken on
  RHEL 5.2's Xen though, so if you use RHEL 5.2 you might want to upgrade
  manually to Xen 3.2.
- In every domU's config file: cpus = "^0"
  This will prevent the domU from using CPU 0.
- Reboot the server.

If you can't reboot the server, you have to rearrange dom0's and the domUs'
CPUs manually using "xm vcpu-set" and "xm vcpu-pin".

Regards,

Fajar
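Taken together, the pieces Fajar describes might look roughly like this; the
guest name "myguest" and the 8-CPU layout are placeholders, not taken from his
setup:

    # /etc/xen/xend-config.sxp -- limit dom0 to a single vcpu
    (dom0-cpus 1)

    # /etc/rc.local (or an equivalent boot-time hook) -- pin dom0's vcpu 0
    xm vcpu-pin 0 0 0

    # /etc/xen/myguest -- domU config: "^0" keeps this guest off physical cpu 0
    cpus = "^0"

    # Roughly the same effect at runtime, without a reboot, on an 8-CPU box:
    xm vcpu-pin 0 0 0            # dom0 (domain id 0), vcpu 0 -> pcpu 0
    xm vcpu-pin myguest all 1-7  # keep all of the guest's vcpus on pcpus 1-7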
The instructions have a small problem: I use SUSE and there is no
/etc/rc.local. Any idea?

Also, I changed the configuration files for the DomUs, but is there any way to
force them to reread the configuration without restarting them? How about
changing dom0? It is very hard for me to reboot the box.

Fajar A. Nugraha wrote:
> - In /etc/rc.local: xm vcpu-pin 0 0 0
>   This will make sure dom0's CPU is pinned to physical CPU 0.
On Friday June 20 2008 01:21:43 pm Venefax wrote:
> The instructions have a small problem: I use SUSE and there is no
> /etc/rc.local. Any idea?

Interestingly enough, there are /etc/init.d/before.local and after.local. You
have to read /etc/init.d/rc to discover them. Basically, after.local runs
after all the services have been started upon entering a runlevel;
before.local runs before the runlevel is entered. If you anticipate changing
runlevels often, you may want to restrict the actions in after.local to a
particular runlevel.

> Also, I changed the configuration files for the DomUs, but is there any way
> to force them to reread the configuration without restarting them?

In general, nope. But you can manually use 'xm vcpu-pin domid ...', and your
domU will only use those vcpus until the next restart. (In my limited testing,
Linux reacts better to pinning than Windows.)

> How about changing dom0? It is very hard for me to reboot the box.

But you can restart the domUs, so they can re-read their configs. You can do
the vcpu pinning manually at first. Changing the configs, and after.local, is
just so it stays that way when you do reboot dom0 / restart the domUs.
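A minimal sketch of what such an after.local could carry for the dom0 pinning
discussed above; the runlevel check is just one possible way to restrict it,
as jim suggests, and the xm path is assumed:

    #!/bin/sh
    # /etc/init.d/after.local -- run by /etc/init.d/rc on SUSE after all
    # services for the target runlevel have started
    rl=$(/sbin/runlevel | awk '{print $2}')
    case "$rl" in
      3|5)
        # pin dom0 (domain id 0), vcpu 0 to physical cpu 0
        /usr/sbin/xm vcpu-pin 0 0 0
        ;;
    esac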
jim burns wrote:
> Interestingly enough, there are /etc/init.d/before.local and after.local.
> You have to read /etc/init.d/rc to discover them. Basically, after.local
> runs after all the services have been started upon entering a runlevel;
> before.local runs before the runlevel is entered. If you anticipate changing
> runlevels often, you may want to restrict the actions in after.local to a
> particular runlevel.

Different distributions follow different conventions. I personally *loathe*
putting things in rc.local, because then you can't select to run it all by
itself, and it's not easy to enable or disable it without directly editing a
default system file. I prefer using /etc/init.d and scripts there, with
chkconfig if available, to control when something runs and in what order
relative to other tools.

> But you can restart the domUs, so they can re-read their configs. You can do
> the vcpu pinning manually at first. Changing the configs, and after.local,
> is just so it stays that way when you do reboot dom0 / restart the domUs.

If you use init scripts in /etc/init.d, it's often possible to simply bring a
machine down to rerun that script, or to change the runlevel with 'telinit' to
get them to restart as needed.
On Sunday June 22 2008 06:22:37 am Nico Kadel-Garcia wrote:
> Different distributions follow different conventions. I personally *loathe*
> putting things in rc.local, because then you can't select to run it all by
> itself, and it's not easy to enable or disable it without directly editing a
> default system file. I prefer using /etc/init.d and scripts there, with
> chkconfig if available, to control when something runs and in what order
> relative to other tools.

Agreed. If I absolutely have to control the order, I'll modify a template for
a service, complete with the info chkconfig needs. (This varies by distro, but
for Red Hat type distros, a comment near the top does the trick, e.g.
'# chkconfig: 2345 20 80', which is what I had to use to get antivir to play
nice. Chkconfig chokes unless all the comments in the block from
'# chkconfig:' to '# description:' are present.)

However, rc.local runs as /etc/rc5.d/S99local (substitute your runlevel) with
no corresponding K (kill or stop) link, and as such is meant to run after all
other services have started. Without a K link, you don't *have* to be rigorous
about writing a full init.d style script, but nothing is stopping you from
doing so. I use it to correct flaws in the default order of starting services.
(I wouldn't want to modify the default scripts themselves, as they can get
overwritten by an update.) E.g., on Red Hat style distros, my virus checker,
antivir, needs a kernel module, dazuko, that can't be loaded at the same time
as the 'capability' module, which is required for proper service startup.
Also, wireless cards won't init properly with WPA2 encryption, since at
'network' script run time, 'wpa_supplicant' is not running. YMMV.
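As a concrete (hypothetical) example of that comment block, a bare-bones
Red Hat-style init script carrying the dom0 pinning from earlier in the thread
might look like this; the script name "dom0-pin", runlevels and priorities are
made up for illustration:

    #!/bin/sh
    # chkconfig: 345 99 01
    # description: pin dom0's vcpu 0 to physical cpu 0 after Xen services start
    case "$1" in
      start)
        /usr/sbin/xm vcpu-pin 0 0 0
        ;;
      stop)
        # nothing to undo on the way down
        ;;
    esac

    # registered once with: chkconfig --add dom0-pin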
I already rebooted and reconfigured the system, but how can I measure the
network bandwidth that is being generated by each domU and by all of them
combined? Is there a way? I don't mean to monitor or log it, just to know
every minute how many KB are being sent and received by the physical eth0 and
which domU is actually generating or receiving it.

By the way, the only way to restrict dom0 to one CPU is to change
/boot/grub/menu.lst on SUSE, and add:

module /boot/vmlinuz-2.6.16.60-0.23-xen root=/dev/disk/by-id/scsi-36001e4f0375f87000fb9db4a0d3fd024-part3 vga=0x317 resume=/dev/sda2 splash=silent showopts maxcpus=1

According to Novell Support, the option suggested earlier ((dom0-cpus 1) in
/etc/xen/xend-config.sxp) does not work. Is this accurate?
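On the bandwidth question, two standard places to look from dom0 are xentop's
per-domain network columns and the per-vif byte counters; the interface name
vif1.0 below is just an example (the backend device for domain id 1, first
vif):

    # one batch-mode snapshot; the NETTX(k)/NETRX(k) columns are per domain
    xentop -b -i 1

    # raw byte counters for a single guest's backend interface in dom0
    cat /sys/class/net/vif1.0/statistics/tx_bytes
    cat /sys/class/net/vif1.0/statistics/rx_bytes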