Radoslav Bodó
2023-Sep-13 18:12 UTC
[Pkg-xen-devel] Bug#1051862: server flooded with xen_mc_flush warnings with xen 4.17 + linux 6.1
Package: xen-system-amd64
Version: 4.17.1+2-gb773c48e36-1
Severity: important

Hello,

after the upgrade from Bullseye to Bookworm, one of our dom0s became unusable: the logs/system are continuously flooded with warnings from arch/x86/xen/multicalls.c:102 xen_mc_flush.

The issue starts at the point where system services begin to come up, but nothing very special runs on that box (dom0, nftables, fail2ban, prometheus-node-exporter, 3x domU). We have tried disabling all domUs and fail2ban, as the name of the triggering process would suggest, but the issue is still present. We have also tried several other experiments, but none of them has helped so far:

* the issue arises when xen 4.17 + linux >= 6.1 is booted
* xen + bookworm-backports linux-image-6.4.0-0.deb12.2-amd64 has the same issue
* without the xen hypervisor, linux 6.1 runs just fine
* a systemrescue CD boot and xfs_repair of the rootfs did not help
* memtest seems to be fine after running for hours

As a workaround we have booted xen 4.17 + linux 5.10.0-25 (5.10.191-1), and the system has been running fine for the last few months.
Hardware:
* Dell PowerEdge R750xs
* 2x Intel Xeon Silver 4310 2.1G
* 256GB RAM
* PERC H755 Adapter, 12x 18TB HDDs

Any help, advice or bug confirmation would be appreciated.

Best regards
bodik

(log also in attachment)

```
kernel: [   99.762402] WARNING: CPU: 10 PID: 1301 at arch/x86/xen/multicalls.c:102 xen_mc_flush+0x196/0x220
kernel: [   99.762598] Modules linked in: nvme_fabrics nvme_core bridge xen_acpi_processor xen_gntdev stp llc xen_evtchn xenfs xen_privcmd binfmt_misc intel_rapl_msr ext4 intel_rapl_common crc16 intel_uncore_frequency_common mbcache ipmi_ssif jbd2 nfit libnvdimm ghash_clmulni_intel sha512_ssse3 sha512_generic aesni_intel acpi_ipmi nft_ct crypto_simd cryptd mei_me mgag200 ipmi_si iTCO_wdt intel_pmc_bxt ipmi_devintf drm_shmem_helper dell_smbios nft_masq iTCO_vendor_support isst_if_mbox_pci drm_kms_helper isst_if_mmio dcdbas mei intel_vsec isst_if_common dell_wmi_descriptor wmi_bmof watchdog pcspkr intel_pch_thermal ipmi_msghandler i2c_algo_bit acpi_power_meter button nft_nat joydev evdev sg nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink drm fuse loop efi_pstore configfs ip_tables x_tables autofs4 xfs libcrc32c crc32c_generic hid_generic usbhid hid dm_mod sd_mod t10_pi crc64_rocksoft crc64 crc_t10dif crct10dif_generic ahci libahci xhci_pci libata xhci_hcd
kernel: [   99.762633]  megaraid_sas tg3 crct10dif_pclmul crct10dif_common crc32_pclmul crc32c_intel bnxt_en usbcore scsi_mod i2c_i801 libphy i2c_smbus usb_common scsi_common wmi
kernel: [   99.764765] CPU: 10 PID: 1301 Comm: python3 Tainted: G        W          6.1.0-12-amd64 #1  Debian 6.1.52-1
kernel: [   99.764989] Hardware name: Dell Inc. PowerEdge R750xs/0441XG, BIOS 1.8.2 09/14/2022
kernel: [   99.765214] RIP: e030:xen_mc_flush+0x196/0x220
kernel: [   99.765436] Code: e2 06 48 01 da 85 c0 0f 84 23 ff ff ff 48 8b 43 18 48 83 c3 40 48 c1 e8 3f 41 01 c5 48 39 d3 75 ec 45 85 ed 0f 84 06 ff ff ff <0f> 0b e8 e3 6e a0 00 41 8b 14 24 44 89 ee 48 c7 c7 c0 ea 33 82 89
kernel: [   99.765910] RSP: e02b:ffffc900412ffc60 EFLAGS: 00010082
kernel: [   99.766152] RAX: ffffffffffffffea RBX: ffff8888a1a9e300 RCX: 0000000000000000
kernel: [   99.766403] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8888a1a9eb10
kernel: [   99.766653] RBP: 0000000080000002 R08: 0000000000000000 R09: 0000000000744f8b
kernel: [   99.766902] R10: 0000000000007ff0 R11: 0000000000000018 R12: ffff8888a1a9e300
kernel: [   99.767153] R13: 0000000000000001 R14: ffffea0005130000 R15: ffffea0005130000
kernel: [   99.767409] FS:  00007f59b5ba62c0(0000) GS:ffff8888a1a80000(0000) knlGS:0000000000000000
kernel: [   99.767664] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
kernel: [   99.767918] CR2: 00007f59b2200000 CR3: 0000000141bd0000 CR4: 0000000000050660
kernel: [   99.768181] Call Trace:
kernel: [   99.768436]  <TASK>
kernel: [   99.768691]  ? __warn+0x7d/0xc0
kernel: [   99.768947]  ? xen_mc_flush+0x196/0x220
kernel: [   99.769204]  ? report_bug+0xe6/0x170
kernel: [   99.769460]  ? handle_bug+0x41/0x70
kernel: [   99.769713]  ? exc_invalid_op+0x13/0x60
kernel: [   99.769967]  ? asm_exc_invalid_op+0x16/0x20
kernel: [   99.770223]  ? xen_mc_flush+0x196/0x220
kernel: [   99.770478]  xen_mc_issue+0x6d/0x70
kernel: [   99.770726]  xen_set_pmd_hyper+0x54/0x90
kernel: [   99.770965]  do_set_pmd+0x188/0x2a0
kernel: [   99.771200]  filemap_map_pages+0x1a9/0x6e0
kernel: [   99.771434]  xfs_filemap_map_pages+0x41/0x60 [xfs]
kernel: [   99.771714]  do_fault+0x1a4/0x410
kernel: [   99.771947]  __handle_mm_fault+0x660/0xfa0
kernel: [   99.772182]  handle_mm_fault+0xdb/0x2d0
kernel: [   99.772414]  do_user_addr_fault+0x19c/0x570
kernel: [   99.772643]  exc_page_fault+0x70/0x170
kernel: [   99.772873]  asm_exc_page_fault+0x22/0x30
kernel: [   99.773102] RIP: 0033:0x7f59b502cbe2
kernel: [   99.773329] Code: 4d 8d 87 80 01 00 00 48 89 d9 45 31 c9 48 85 ff 74 5c 66 0f 1f 44 00 00 49 8d b0 80 fe ff ff 31 d2 41 0f 18 08 f3 0f 6f 04 11 <f3> 0f 6f 1c 16 f3 0f 6f 24 16 66 0f ef c3 66 0f 70 c8 31 66 0f f4
kernel: [   99.773806] RSP: 002b:00007ffc69923f70 EFLAGS: 00010246
kernel: [   99.774040] RAX: 00007ffc69923fa0 RBX: 00007f59b5038000 RCX: 00007f59b5038000
kernel: [   99.774276] RDX: 0000000000000000 RSI: 00007f59b21ffffa RDI: 0000000000000010
kernel: [   99.774504] RBP: 00000000025b8f40 R08: 00007f59b220017a R09: 0000000000000000
kernel: [   99.774727] R10: 0000000000077eca R11: 0000000000000400 R12: 00007f59b1a002fa
kernel: [   99.774947] R13: 00007ffc69923fe0 R14: 00007f59b5038080 R15: 00007f59b21ffffa
kernel: [   99.775161]  </TASK>
kernel: [   99.775365] ---[ end trace 0000000000000000 ]---
kernel: [   99.775567] 1 of 1 multicall(s) failed: cpu 10
kernel: [   99.775763]   call  1: op=1 arg=[ffff8888a1a9eb10] result=-22
```
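[Editor's note] The failing entry in the trace above can be decoded with standard tools: `result=-22` is a negative errno value, and `op` is the hypercall index of the multicall entry (in the stock Xen public headers, index 1 is `__HYPERVISOR_mmu_update`, which matches the `xen_set_pmd_hyper` caller in the backtrace). A small decoding sketch:

```shell
# Decode "call  1: op=1 arg=[ffff8888a1a9eb10] result=-22" from the log.
# result=-22 is a negative errno; print its symbolic name and meaning:
python3 -c 'import errno, os; print(errno.errorcode[22], "-", os.strerror(22))'
# op=1 is the hypercall number of the multicall entry; in the Xen public
# headers (xen.h) that index is __HYPERVISOR_mmu_update, i.e. a rejected
# page-table update, consistent with the PMD-mapping call trace above.
```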
Hans van Kranenburg
2023-Sep-13 21:38 UTC
[Pkg-xen-devel] Bug#1051862: (Debian) Bug#1051862: server flooded with xen_mc_flush warnings with xen 4.17 + linux 6.1
Hi Radoslav,

Thanks for your report...

Hi Juergen, Boris and xen-devel,

At Debian, we got the report below. (Also at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1051862)

This hardware, with only Xen and dom0 running, is hitting the failed-multicall warning and logging in arch/x86/xen/multicalls.c. Can you help advise what we can do to further debug this issue? Since this looks like pretty low-level Xen/hardware stuff, I'd rather ask upstream for directions first. If needed, the Debian Xen Team can assist the end user with the debugging process.

Thanks. More reply inline...

On 9/13/23 20:12, Radoslav Bodó wrote:
> Package: xen-system-amd64
> Version: 4.17.1+2-gb773c48e36-1
> Severity: important
>
> Hello,
>
> after upgrade from Bullseye to Bookworm one of our dom0's
> became unusable due to logs/system being continuously flooded
> with warnings from arch/x86/xen/multicalls.c:102 xen_mc_flush, and the
> system become unusable.
> [...]
> * the issue arise when xen 4.17 + linux >= 6.1 is booted
> * xen + bookworm-backports linux-image-6.4.0-0.deb12.2-amd64 have same isuue
> * without xen hypervisor, linux 6.1 runs just fine
> * systemrescue cd boot and xfs_repair rootfs did not helped
> * memtest seem to be fine running for hours

Thanks for already trying out all these combinations.

> As a workaround we have booted xen 4.17 + linux 5.10.0-25 (5.10.191-1)
> and the system is running fine as for last few months.
>
> Hardware:
> * Dell PowerEdge R750xs
> * 2x Intel Xeon Silver 4310 2.1G
> * 256GB RAM
> * PERC H755 Adapter, 12x 18TB HDDs

I have a few quick additional questions already:

1. For clarification... From your text, I understand that only this one single server is showing the problem after the Debian version upgrade. Does this mean that this is the only server you have running with exactly this combination of hardware (and BIOS version, CPU microcode, etc.)? Or is there another one with the same hardware which does not show the problem?

2. Can you reply with the output of 'xl dmesg' when the problem happens? Or, if the system gets unusable too quickly, do you have a serial console connection to capture the output?

3. To confirm... I understand that there are many of these messages. Since you pasted only one, does that mean that all of them look exactly the same, with "1 of 1 multicall(s) failed: cpu 10" "call 1: op=1 arg=[ffff8888a1a9eb10] result=-22"? Or are there variations? If so, can you reply with a few different ones?

Since this very much looks like an issue in Xen-related code where the Xen hypervisor, dom0 kernel and hardware have to work together correctly (and not a Debian packaging problem), I'm already asking upstream for advice about what we should/could do next, instead of trying to make a guess myself.

Thanks,
Hans

> Any help, advice or bug confirmation would be appreciated
>
> Best regards
> bodik
>
> (log also in attachment)
> [...]
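[Editor's note] Question 2 above mentions a serial console for capturing output when the system becomes unusable. A minimal sketch of the Xen/GRUB settings commonly used for this; the values (COM port, baud rate) are illustrative and depend on the server's BMC/serial setup, so the sketch writes to a scratch file for review rather than touching /etc/default/grub directly:

```shell
# Sketch: proposed serial-console settings for Xen + dom0, written to a
# temporary file so they can be reviewed before merging into GRUB config.
snippet=$(mktemp)
cat > "$snippet" <<'EOF'
# Hypervisor: log to serial and VGA, at maximum verbosity
GRUB_CMDLINE_XEN_DEFAULT="console=com1,vga com1=115200,8n1 loglvl=all guest_loglvl=all"
# dom0 kernel: use the Xen paravirtual console
GRUB_CMDLINE_LINUX_DEFAULT="console=hvc0"
EOF
echo "review $snippet, merge into /etc/default/grub, then run update-grub"
```

With this in place, hypervisor messages (the same ones `xl dmesg` shows) also go to the serial line, so they survive a dom0 hang.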
Juergen Gross
2023-Sep-14 05:43 UTC
[Pkg-xen-devel] Bug#1051862: (Debian) Bug#1051862: server flooded with xen_mc_flush warnings with xen 4.17 + linux 6.1
Hi Hans,

On 13.09.23 23:38, Hans van Kranenburg wrote:
> Hi Radoslav,
>
> Thanks for your report...
>
> Hi Juergen, Boris and xen-devel,
>
> At Debian, we got the report below. (Also at
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1051862)
>
> This hardware, with only Xen and Dom0 running is hitting the failed
> multicall warning and logging in arch/x86/xen/multicalls.c. Can you help
> advise what we can do to further debug this issue?
> [...]
> On 9/13/23 20:12, Radoslav Bodó wrote:
>> [...]
>> kernel: [   99.768181] Call Trace:
>> kernel: [   99.768436]  <TASK>
>> kernel: [   99.768691]  ? __warn+0x7d/0xc0
>> kernel: [   99.768947]  ? xen_mc_flush+0x196/0x220
>> kernel: [   99.769204]  ? report_bug+0xe6/0x170
>> kernel: [   99.769460]  ? handle_bug+0x41/0x70
>> kernel: [   99.769713]  ? exc_invalid_op+0x13/0x60
>> kernel: [   99.769967]  ? asm_exc_invalid_op+0x16/0x20
>> kernel: [   99.770223]  ? xen_mc_flush+0x196/0x220
>> kernel: [   99.770478]  xen_mc_issue+0x6d/0x70
>> kernel: [   99.770726]  xen_set_pmd_hyper+0x54/0x90
>> kernel: [   99.770965]  do_set_pmd+0x188/0x2a0

This looks like an attempt to map a hugepage, which isn't supported when running as a Xen PV guest (this includes dom0).

Are transparent hugepages enabled somehow? In a Xen PV guest there should be no /sys/kernel/mm/transparent_hugepage directory. Depending on the presence of that directory, either hugepage_init() has a bug, or a test for hugepages being supported is missing in filemap_map_pages() or do_set_pmd().

>> kernel: [   99.771200]  filemap_map_pages+0x1a9/0x6e0
>> kernel: [   99.771434]  xfs_filemap_map_pages+0x41/0x60 [xfs]
>> kernel: [   99.771714]  do_fault+0x1a4/0x410
>> kernel: [   99.771947]  __handle_mm_fault+0x660/0xfa0
>> [...]
>> kernel: [   99.775567] 1 of 1 multicall(s) failed: cpu 10
>> kernel: [   99.775763]   call  1: op=1 arg=[ffff8888a1a9eb10] result=-22

Juergen
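[Editor's note] Juergen's check can be scripted; a small sketch to run in dom0, using only the sysfs paths named in his reply (the /sys/hypervisor/type check is an extra, standard sysfs node, not something Juergen asked for):

```shell
# Sketch: verify whether transparent-hugepage controls are exposed.
# In a Xen PV domain (including dom0) the directory should be absent.
thp=/sys/kernel/mm/transparent_hugepage
if [ -d "$thp" ]; then
    echo "THP sysfs present: $(cat "$thp"/enabled 2>/dev/null)"
else
    echo "THP sysfs absent (expected for a Xen PV guest)"
fi
# How the kernel identifies the hypervisor, if the sysfs node exists:
cat /sys/hypervisor/type 2>/dev/null || echo "no /sys/hypervisor/type"
```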
Radoslav Bodó
2023-Sep-14 07:46 UTC
[Pkg-xen-devel] Bug#1051862: (Debian) Bug#1051862: server flooded with xen_mc_flush warnings with xen 4.17 + linux 6.1
Hi all, hopefully it's ok to reply-all at this point On 9/13/23 23:38, Hans van Kranenburg wrote:> I have a few quick additional questions already: > > 1. For clarification.. From your text, I understand that only this one > single server is showing the problem after the Debian version upgrade. > Does this mean that this is the only server you have running with > exactly this combination of hardware (and BIOS version, CPU microcode > etc etc)? Or, is there another one with same hardware which does not > show the problem?This is the unique HW combination in terms of server type Dell R750xs and CPU type 'Intel Xeon Silver 4310'> 2. Can you reply with the output of 'xl dmesg' when the problem happens? > Or, if the system gets unusable too quick, do you have a serial console > connection to capture the output?in attachment> 3. To confirm... I understand that there are many of these messages. > Since you pasted only one, does that mean that all of them look exactly > the same, with "1 of 1 multicall(s) failed: cpu 10" "call 1: op=1 > arg=[ffff8888a1a9eb10] result=-22"? Or are there variations? If so, can > you reply with a few different ones?all looks exacly same, only 1 of 1 multicalls failed with same result On 9/14/23 07:43, Juergen Gross wrote: >>> kernel: [ 99.768181] Call Trace: >>> kernel: [ 99.768436] <TASK> >>> kernel: [ 99.768691] ? __warn+0x7d/0xc0 >>> kernel: [ 99.768947] ? xen_mc_flush+0x196/0x220 >>> kernel: [ 99.769204] ? report_bug+0xe6/0x170 >>> kernel: [ 99.769460] ? handle_bug+0x41/0x70 >>> kernel: [ 99.769713] ? exc_invalid_op+0x13/0x60 >>> kernel: [ 99.769967] ? asm_exc_invalid_op+0x16/0x20 >>> kernel: [ 99.770223] ? xen_mc_flush+0x196/0x220 >>> kernel: [ 99.770478] xen_mc_issue+0x6d/0x70 >>> kernel: [ 99.770726] xen_set_pmd_hyper+0x54/0x90 >>> kernel: [ 99.770965] do_set_pmd+0x188/0x2a0 > > This looks like an attempt to map a hugepage, which isn't supported > when running as a Xen PV guest (this includes dom0). 
> > Are transparent hugepages enabled somehow? In a Xen PV guest there
> should be no /sys/kernel/mm/transparent_hugepage directory. Depending
> on the presence of that directory either hugepage_init() has a bug, or
> a test for hugepages being supported is missing in filemap_map_pages()
> or do_set_pmd().
>
>>> kernel: [ 99.771200] filemap_map_pages+0x1a9/0x6e0
>>> kernel: [ 99.771434] xfs_filemap_map_pages+0x41/0x60 [xfs]
>>> kernel: [ 99.771714] do_fault+0x1a4/0x410
>>> kernel: [ 99.771947] __handle_mm_fault+0x660/0xfa0

In the faulty state (linux 6.1) and also in the good state (linux 5.10), the directory /sys/kernel/mm/transparent_hugepage is not present.

We have also tried to boot with 'transparent_hugepage=never', but it makes no difference.

Best regards
bodik

-------------- next part --------------
(XEN) Xen version 4.17.2-pre (Debian 4.17.1+2-gb773c48e36-1) (pkg-xen-devel at lists.alioth.debian.org) (x86_64-linux-gnu-gcc (Debian 12.2.0-14) 12.2.0) debug=n Thu May 18 19:26:30 UTC 2023
(XEN) Bootloader: GRUB 2.06-13
(XEN) Command line: placeholder dom0_mem=32G,max:32G
(XEN) Xen image load base address: 0x5e800000
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 2 MBR signatures
(XEN)  Found 2 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  [0000000000000000, 0000000000098fff] (usable)
(XEN)  [0000000000099000, 000000000009ffff] (reserved)
(XEN)  [00000000000e0000, 00000000000fffff] (reserved)
(XEN)  [0000000000100000, 000000004a413fff] (usable)
(XEN)  [000000004a414000, 000000004b413fff] (ACPI NVS)
(XEN)  [000000004b414000, 000000004bfc2fff] (usable)
(XEN)  [000000004bfc3000, 000000004c0c8fff] (reserved)
(XEN)  [000000004c0c9000, 000000004cffffff] (usable)
(XEN)  [000000004d000000, 000000004d1fffff] (reserved)
(XEN)  [000000004d200000, 000000005eefdfff] (usable)
(XEN)  [000000005eefe000, 000000006e3fefff] (reserved)
(XEN)  [000000006e3ff000, 000000006f3fefff] (ACPI NVS)
(XEN)  [000000006f3ff000, 000000006f7fefff] (ACPI data)
(XEN)  [000000006f7ff000, 000000006f7fffff] (usable)
(XEN)  [000000006f800000, 000000008fffffff] (reserved)
(XEN)  [00000000fd000000, 00000000fe7fffff] (reserved)
(XEN)  [00000000fec00000, 00000000fec00fff] (reserved)
(XEN)  [00000000fec80000, 00000000fed00fff] (reserved)
(XEN)  [00000000fed40000, 00000000fed44fff] (reserved)
(XEN)  [00000000ff000000, 00000000ffffffff] (reserved)
(XEN)  [0000000100000000, 000000407fffffff] (usable)
(XEN) ACPI: RSDP 000FE320, 0024 (r2 DELL )
(XEN) ACPI: XSDT 6F40A188, 00F4 (r1 DELL PE_SC3 0 DELL 1000013)
(XEN) ACPI: FACP 6F7F6000, 0114 (r6 DELL PE_SC3 0 DELL 1)
(XEN) ACPI: DSDT 6F770000, 7FAD3 (r2 DELL PE_SC3 3 DELL 1)
(XEN) ACPI: FACS 6F373000, 0040
(XEN) ACPI: SSDT 6F7FB000, 1571 (r2 INTEL RAS_ACPI 1 INTL 20210331)
(XEN) ACPI: SSDT 6F7FA000, 0745 (r2 INTEL ADDRXLAT 1 INTL 20210331)
(XEN) ACPI: EINJ 6F7F9000, 0150 (r1 DELL PE_SC3 1 INTL 1)
(XEN) ACPI: BERT 6F7F8000, 0030 (r1 DELL PE_SC3 1 INTL 1)
(XEN) ACPI: ERST 6F7F7000, 0230 (r1 DELL PE_SC3 1 INTL 1)
(XEN) ACPI: HMAT 6F7F5000, 0180 (r1 DELL PE_SC3 1 DELL 1)
(XEN) ACPI: HPET 6F7F4000, 0038 (r1 DELL PE_SC3 1 DELL 1)
(XEN) ACPI: MCFG 6F7F3000, 003C (r1 DELL PE_SC3 1 DELL 1)
(XEN) ACPI: MIGT 6F7F2000, 0040 (r1 DELL PE_SC3 0 DELL 1)
(XEN) ACPI: MSCT 6F7F1000, 0090 (r1 DELL PE_SC3 1 DELL 1)
(XEN) ACPI: WSMT 6F7F0000, 0028 (r1 DELL PE_SC3 0 DELL 1)
(XEN) ACPI: APIC 6F76F000, 035E (r4 DELL PE_SC3 0 DELL 1)
(XEN) ACPI: SLIT 6F76E000, 0030 (r1 DELL PE_SC3 1 DELL 1000013)
(XEN) ACPI: SRAT 6F767000, 6430 (r3 DELL PE_SC3 2 DELL 1000013)
(XEN) ACPI: OEM4 6F5DF000, 187A61 (r2 INTEL CPU CST 3000 INTL 20210331)
(XEN) ACPI: OEM1 6F4CB000, 113489 (r2 INTEL CPU EIST 3000 INTL 20210331)
(XEN) ACPI: OEM2 6F484000, 46031 (r2 INTEL CPU HWP 3000 INTL 20210331)
(XEN) ACPI: SSDT 6F40D000, 764A5 (r2 INTEL SSDT PM 4000 INTL 20210331)
(XEN) ACPI: SSDT 6F40C000, 0AA3 (r2 DELL PE_SC3 0 DELL 1)
(XEN) ACPI: HEST 6F40B000, 017C (r1 DELL PE_SC3 1 INTL 1)
(XEN) ACPI: SSDT 6F7FD000, 0623 (r2 DELL Tpm2Tabl 1000 INTL 20210331)
(XEN) ACPI: TPM2 6F409000, 004C (r4 DELL PE_SC3 2 DELL 1000013)
(XEN) ACPI: SSDT 6F401000, 7299 (r2 INTEL SpsNm 2 INTL 20210331)
(XEN) ACPI: SSDT 6F400000, 06EA (r2 DELL PE_SC3 2 DELL 1)
(XEN) ACPI: DMAR 6F3FF000, 0188 (r1 DELL PE_SC3 1 DELL 1)
(XEN) System RAM: 261595MB (267873864kB)
(XEN) Domain heap initialised DMA width 32 bits
(XEN) x2APIC mode is already enabled by BIOS.
(XEN) ACPI: 32/64X FACS address mismatch in FADT - 6f373000/0000000000000000, using 32
(XEN) IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-119
(XEN) CPU0: TSC: ratio: 168 / 2
(XEN) CPU0: bus: 100 MHz base: 2100 MHz max: 3300 MHz
(XEN) CPU0: 800 ... 2100 MHz
(XEN) xstate: size: 0xa88 and states: 0x2e7
(XEN) Unrecognised CPU model 0x6a - assuming vulnerable to LazyFPU
(XEN) Speculative mitigation facilities:
(XEN)  Hardware hints: RDCL_NO IBRS_ALL SKIP_L1DFL MDS_NO TAA_NO SBDR_SSDP_NO PSDP_NO
(XEN)  Hardware features: IBPB IBRS STIBP SSBD PSFD L1D_FLUSH MD_CLEAR TSX_CTRL FB_CLEAR FB_CLEAR_CTRL
(XEN)  Compiled-in support: INDIRECT_THUNK SHADOW_PAGING
(XEN)  Xen settings: BTI-Thunk JMP, SPEC_CTRL: IBRS+ STIBP+ SSBD- PSFD- TSX+, Other: IBPB-ctxt BRANCH_HARDEN
(XEN)  Support for HVM VMs: MSR_SPEC_CTRL MSR_VIRT_SPEC_CTRL RSB EAGER_FPU
(XEN)  Support for PV VMs: MSR_SPEC_CTRL EAGER_FPU
(XEN)  XPTI (64-bit PV only): Dom0 disabled, DomU disabled (with PCID)
(XEN)  PV L1TF shadowing: Dom0 disabled, DomU disabled
(XEN) Using scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Initializing Credit2 scheduler
(XEN) Platform timer is 24.000MHz HPET
(XEN) Detected 2095.078 MHz processor.
(XEN) Intel VT-d iommu 8 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 7 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 6 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 5 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 4 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 3 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 2 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d iommu 9 supported page sizes: 4kB, 2MB, 1GB
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Posted Interrupt not enabled.
(XEN) Intel VT-d Shared EPT tables enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) Enabling APIC mode: Clustered. Using 1 I/O APICs
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) Allocated console ring of 128 KiB.
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN)  - APIC Register Virtualization
(XEN)  - Virtual Interrupt Delivery
(XEN)  - Posted Interrupt Processing
(XEN)  - VMCS shadowing
(XEN)  - VM Functions
(XEN)  - Virtualisation Exceptions
(XEN)  - Page Modification Logging
(XEN)  - TSC Scaling
(XEN)  - Bus Lock Detection
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) Brought up 48 CPUs
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Initializing Credit2 scheduler
(XEN) Dom0 has maximum 1368 PIRQs
(XEN) Xen kernel: 64-bit, lsb
(XEN) Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x4a00000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.: 0000004020000000->0000004028000000 (8345580 pages to be allocated)
(XEN)  Init. ramdisk: 000000407d7ec000->000000407ffff69e
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff84a00000
(XEN)  Phys-Mach map: 0000008000000000->0000008004000000
(XEN)  Start info: ffffffff84a00000->ffffffff84a004b8
(XEN)  Page tables: ffffffff84a01000->ffffffff84a2a000
(XEN)  Boot stack: ffffffff84a2a000->ffffffff84a2b000
(XEN)  TOTAL: ffffffff80000000->ffffffff84c00000
(XEN)  ENTRY ADDRESS: ffffffff830721c0
(XEN) Dom0 has maximum 48 VCPUs
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Scrubbing Free RAM in background
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) Freed 624kB init memory
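[Editor's note] The transparent-hugepage check discussed above can be scripted for anyone reproducing this report. This is a minimal sketch, not from the original thread; the sysfs path is the standard location documented in the kernel admin guide, and the output strings are illustrative:

```shell
#!/bin/sh
# Check whether the transparent_hugepage sysfs directory exists.
# On a Xen PV dom0 it is expected to be absent (as observed in this report,
# on both linux 6.1 and 5.10).
thp_dir=/sys/kernel/mm/transparent_hugepage

if [ -d "$thp_dir" ]; then
    echo "THP directory present; current setting:"
    cat "$thp_dir/enabled"
else
    echo "THP directory absent (expected on a Xen PV dom0)"
fi
```

If the directory is present, THP can be disabled for testing either at runtime (`echo never > $thp_dir/enabled`) or, as was tried here without effect, via the `transparent_hugepage=never` kernel boot parameter.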