Hello,

For the past months I've been testing an upgrade of my Xen hosts to CentOS 7, and I've hit an issue I need your help to solve.

The testing machines are IBM blades, models H21 and H21XM. Initial tests were performed on the H21 with 16 GB RAM; for the last 6-7 weeks I've been using the H21XM with 64 GB. In all cases the guests were fully updated CentOS 7 -- initially 7.6 (the most recent release at the time of the initial tests), and 7.8 for the tests performed during the last 2 months. As host I initially used CentOS 6 with the latest kernel available in the CentOS virt repo at the time of the tests, and later CentOS 7 with the latest kernel as well. As Xen versions I tested 4.8 and 4.12 (xl info included below). The storage for the last tests is a Crucial MX500, but results were similar when using a traditional HDD.

My problem, in short, is that the guests are extremely slow. For instance, in the most recent tests, a "yum install kernel" takes about 1 minute on the host and 12-15 (!!!) minutes in the guest, with almost all of the time spent in dracut regenerating the initramfs images. I've done rough tests with the storage (via dd if=/dev/zero of=a_test_file bs=10M count=1000) and the speed was comparable between the hosts and the guests. The version of the kernel in use inside the guest also did not seem to make any difference. OTOH, sysbench ( https://github.com/akopytov/sysbench/ ) as well as the p7zip benchmark report a speed for the guests that is between 10% and 50% of the host's. Quite obviously, changing the elevator had no influence either.

Here is the info which I think is relevant for the software versions in use. Feel free to ask for any additional info.

[root at t7 ~]# xl info
host                   : t7
release                : 4.9.215-36.el7.x86_64
version                : #1 SMP Mon Mar 2 11:42:52 UTC 2020
machine                : x86_64
nr_cpus                : 8
max_cpu_id             : 7
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 3000.122
hw_caps                : bfebfbff:000ce3bd:20100800:00000001:00000000:00000000:00000000:00000000
virt_caps              : pv hvm
total_memory           : 57343
free_memory            : 53620
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 12
xen_extra              : .2.39.g3536f8dc
xen_version            : 4.12.2.39.g3536f8dc
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit2
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=1024M,max:1024M cpuinfo com1=115200,8n1 console=com1,tty loglvl=all guest_loglvl=all ucode=-1
cc_compiler            : gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
cc_compile_by          : mockbuild
cc_compile_domain      : centos.org
cc_compile_date        : Tue Apr 14 14:22:04 UTC 2020
build_id               : 24148a191438467f26a9e16089205544a428f661
xend_config_format     : 4

[root at t5 ~]# xl info
host                   : t5
release                : 4.9.215-36.el6.x86_64
version                : #1 SMP Mon Mar 2 10:30:40 UTC 2020
machine                : x86_64
nr_cpus                : 8
max_cpu_id             : 7
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 2000
hw_caps                : b7ebfbff:0004e33d:20100800:00000001:00000000:00000000:00000000:00000000
virt_caps              : hvm
total_memory           : 12287
free_memory            : 6955
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 8
xen_extra              : .5.86.g8db85532
xen_version            : 4.8.5.86.g8db85532
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : dom0_mem=1024M,max:1024M cpuinfo com1=115200,8n1 console=com1,tty loglvl=all guest_loglvl=all
cc_compiler            : gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23)
cc_compile_by          : mockbuild
cc_compile_domain      : centos.org
cc_compile_date        : Thu Dec 12 14:34:48 UTC 2019
build_id               : da34ae5b90c82137dcbc466cd66322381bc6fd21
xend_config_format     : 4

/Note:/ with all other kernels and Xen versions that were published for C6 during the last year, the performance was the same, i.e. slow.

The test VM is exactly the same, copied among servers:

[root at t7 ~]# cat /etc/xen/test7_1
builder = "hvm"
xen_platform_pci=1
name = "Test7"
memory = 2048
maxmem = 4096
vcpus = 2
vif = [ "mac=00:14:5e:d9:df:50,bridge=xenbr0,model=e1000" ]
disk = [ "file:/var/lib/xen/images/test7_1,xvda,w" ]
sdl = 0
vnc = 1
bootloader = 'xenpvnetboot'
#bootloader_args = ['--location', 'http://internal.x.y/mrepo/centos7-x86_64/disc1/']
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'
#boot="nd"
boot="d"
pae=1
acpi=1
apic=1
tsc_mode=0

/Notes:/

- the lines past "boot" in the config do not make any difference either; they were added during last week's tests.

- I've tested with 1, 2, 4 and 8 VCPUs. There is no difference for the real-life apps.

Frankly, at this moment I have no idea what else to change or test, so... please help.
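A side note on the dd test above: writing /dev/zero through the page cache largely measures RAM, which may explain why host and guest looked comparable there while real workloads differ so much. A minimal sketch of a write test that at least forces the data to disk before reporting a speed (the path and sizes here are illustrative, not from the original post):

```shell
# Sequential-write sketch: conv=fsync makes dd flush everything to disk
# before it prints a throughput figure, so the page cache cannot hide
# slow storage the way a plain "dd of=a_test_file" can.
TESTFILE=/tmp/dd_seq_test   # hypothetical path; point it at the filesystem under test
dd if=/dev/zero of="$TESTFILE" bs=4M count=25 conv=fsync
# Stricter still: add oflag=direct to bypass the cache entirely
# (not supported on tmpfs, so it may fail under /tmp).
ls -l "$TESTFILE"
```

A tool like fio with --direct=1 gives more controlled numbers, but even this dd variant usually separates a disk-bound guest from a cache-bound one.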
On 15-06-2020 4:49, Manuel Wolfshant wrote:
[... full problem description and xl info quoted above trimmed ...]

Wolfy, to begin with, can you try the kernel-xen package from https://xen.crc.id.au/support/guides/install/ with the CPU vulnerability mitigations turned off for both dom0 and domU?

-- 
Adi Pircalabu
Stephen John Smoogen
2020-Jun-15 11:46 UTC
[CentOS-virt] very low performance of Xen guests
On Sun, 14 Jun 2020 at 14:49, Manuel Wolfshant <wolfy at nobugconsulting.ro> wrote:
[... full problem description quoted above trimmed ...]

Is there a way to boot up a PV guest versus an HVM?

I could not find an H21XM but found an HS21XM on the IBM site, and that seemed to be a 4-core, 8-thread CPU which looks 'old' enough that the Spectre/etc fixes to improve performance after the initial hit were not done for it. (Basically I was told that if the CPU was older than 2012, just turn off hyperthreading altogether to try and get back some performance.. but don't expect much.) As such I would also try turning off HT on the CPU to see if that improves anything.

-- 
Stephen J Smoogen.
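For checking whether HT is actually active before and after flipping the BIOS toggle, comparing logical CPUs against unique physical cores is a quick sketch (on the E5450 in these blades HT does not exist at all, so the two counts should already match; the commands below are standard procfs/util-linux, nothing from the thread):

```shell
# Count logical CPUs (threads) and unique (core, socket) pairs; if the
# first exceeds the second, SMT/hyperthreading is active.
logical=$(grep -c '^processor' /proc/cpuinfo)
cores=$(lscpu -p=CORE,SOCKET 2>/dev/null | grep -v '^#' | sort -u | wc -l)
echo "logical CPUs: $logical, physical cores: $cores"
if [ -n "$cores" ] && [ "$cores" -gt 0 ] && [ "$logical" -gt "$cores" ]; then
    echo "SMT appears to be enabled"
else
    echo "SMT appears to be off (or core count unavailable)"
fi
```

On newer kernels /sys/devices/system/cpu/smt/control reports and toggles this directly, but that interface does not exist on the 3.10/4.9 kernels in this thread, hence the BIOS route.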
On 6/15/20 2:46 PM, Stephen John Smoogen wrote:
[... original problem description trimmed ...]

> Is there a way to boot up a PV guest versus an HVM?

If I understood the docs correctly, newer Xen does only PVHVM (xen_platform_pci=1 activates that) and HVM. But they say it's better than PV. And I did verify: PVHVM is indeed enabled and active.

> I could not find a H21XM but found an HS21XM on the iBM site

My bad. The blades are indeed HS21 (Type 8853) and HS21 XM (Type 7995). The XM blades have 2 * Xeon E5450 @ 3 GHz / 12 MB L2 cache processors. The options I can fiddle with are https://imgur.com/a/DonXe5P . AFAICS the settings are reasonable, but please do let me know if there is anything there that should not be as it is.

> that seemed to be a 4 core 8 thread cpu which looks 'old' enough that the
> Spectre/etc fixes to improve performance after the initial hit were not
> done. (Basically I was told that if the CPU was older than 2012, just turn
> off hyperthreading altogether to try and get back some performance.. but
> don't expect much).

I can live with that. My problem is that DomU is much, much slower than Dom0, so it seems Xen virtualization affects the performance (heavily).

> As such I would also try turning off HT on the CPU to see if that improves
> anything.

I got inspired by Adi's earlier suggestion, and after reading https://access.redhat.com/articles/3311301 I've tried today all variants of disabling the spectre mitigations. Whatever I do, immediately after a reboot, "yum reinstall kernel" does not take less than 5 minutes :( It goes down to 2 min if I repeat the operation afterwards, so I guess some caching kicks in. I will try later today the kernels from elrepo and maybe even xen.crc.id.au (I kind of hate the "disable selinux" recommendation from the install page, so I postponed it in the hope of another solution).
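When trying all those mitigation variants, it helps to confirm after each reboot which state actually took effect. A sketch using the sysfs vulnerabilities interface (present in the RHEL/CentOS 7 kernels alongside the debugfs knobs used later in this thread):

```shell
# Print the live mitigation state for each known CPU vulnerability.
# On RHEL/CentOS 7 the same state is controlled by the debugfs files
# (ibrs_enabled / pti_enabled / retp_enabled) or by boot flags such as
# "noibrs noibpb nopti" -- see the Red Hat article referenced above.
if [ -d /sys/devices/system/cpu/vulnerabilities ]; then
    grep -H . /sys/devices/system/cpu/vulnerabilities/*
else
    echo "vulnerabilities sysfs not available on this kernel"
fi
```

Checking this inside both Dom0 and DomU matters, since a mitigation disabled in the guest can still be active in the hypervisor underneath it.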
On 6/15/20 5:40 PM, Stephen John Smoogen wrote:
> On Mon, 15 Jun 2020 at 09:42, Manuel Wolfshant <wolfy at nobugconsulting.ro> wrote:
> > I got inspired by Adi's earlier suggestion and after reading
> > https://access.redhat.com/articles/3311301 I've tried today all variants
> > of disabling the spectre mitigations. [...]
>
> If you can do a full reinstall, could you see if a KVM host/guest combo has
> the same problem? That would at least point the finger more firmly at VT,
> spectre or something else.

I finally managed to install a fresh KVM host/guest pair on an identical blade (HS21 XM, 64 GB RAM, 2 * E5450 @ 3.00 GHz). Here are the results I see:

1. KVM host, stock installation and fully updated, kernel 3.10.0-1127.10.1

# cd /sys/kernel/debug/x86/
# cat ibrs_enabled pti_enabled retp_enabled
0
1
1
# time yum -y reinstall kernel-3.10.0-1127.el7.x86_64
real    0m50.026s
user    0m32.872s
sys     0m23.312s

2. KVM guest on the same machine (virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk size=20 --network=bridge:br0 --pxe --os-variant rhel7 <=== copy/paste from https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_installation_overview-creating_guests_with_virt_install ), stock installation and fully updated, with absolutely no change to the defaults, including the same ibrs_enabled / pti_enabled / retp_enabled as the host, kernel 3.10.0-1127.10.1

# time yum -y reinstall kernel-3.10.0-1127.el7.x86_64
real    2m39.644s
user    1m54.662s
sys     1m32.496s

3. Xen DomU, 3.10.0-1127.8.2.el7.x86_64 (but results are consistent across all kernels)

# cat ibrs_enabled pti_enabled retp_enabled
0
0
0
# time yum -y reinstall kernel-3.10.0-1127.el7.x86_64
real    5m44.030s
user    2m9.931s
sys     4m7.771s

4. Dom0, 4.9.215-36.el7.x86_64, Xen 4.12 from CentOS' repo

# time yum -y reinstall kernel-3.10.0-1127.el7.x86_64
real    1m52.417s
user    0m45.704s
sys     1m32.167s
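To put those four timings side by side, the wall-clock slowdown relative to the KVM host works out roughly as follows (seconds taken from the "real" lines above, rounded):

```shell
# Slowdown of each environment vs. the bare KVM host (~50 s baseline),
# using the real (wall-clock) times reported in the measurements above.
baseline=50          # KVM host: 0m50.026s
for pair in "KVM guest:160" "Xen DomU:344" "Xen Dom0:112"; do
    name=${pair%:*}
    secs=${pair#*:}
    awk -v n="$name" -v t="$secs" -v b="$baseline" \
        'BEGIN { printf "%-9s %3ds -> %.1fx the host time\n", n, t, t/b }'
done
```

So the KVM guest pays roughly a 3x penalty while the Xen DomU pays almost 7x (about 14% of host speed), which is consistent with the earlier sysbench observation of 10-50% of host performance.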