Dustin Black
2009-Aug-06 22:03 UTC
[Xen-users] High iowait in dom0 after creating a new guest, with file-based domU disk on GFS
Hopefully I summed up the gist of the problem in the subject line. ;)

I have a GFS cluster of ten Xen 3.0 dom0s sharing an iSCSI LUN, with an average of 8 domUs running on each Xen server. The dom0 on each server is hard-coded to 2 CPUs and 2GB of RAM with no ballooning, and has 2GB of partition-based swap.

When a new domU is created on any of the Xen servers, just after the operating system kickstart install completes and the new domU begins its shutdown, the dom0 starts experiencing high load and an iowait that consumes 100% of one of its CPUs. This lasts for several minutes (generally 10-15), and the shutdown of the new domU usually hangs during this time (xm console will not exit, and the domain keeps running until the iowait drops). No swap space is consumed by the dom0 OS, and no major disk activity is apparent in iostat. Watching the threads in top, the xenstored processes consistently sit at the top of the process list, but without apparently consuming CPU time (%CPU shows 0). Nothing of much interest shows up in syslog.

dom0 syslog during the domU install (minus audit entries):

Aug 6 17:40:25 xen03 kernel: device vif11.0 entered promiscuous mode
Aug 6 17:40:25 xen03 kernel: ADDRCONF(NETDEV_UP): vif11.0: link is not ready
Aug 6 17:40:34 xen03 kernel: ADDRCONF(NETDEV_CHANGE): vif11.0: link becomes ready
Aug 6 17:40:34 xen03 kernel: xenbr11: topology change detected, propagating
Aug 6 17:40:34 xen03 kernel: xenbr11: port 9(vif11.0) entering forwarding state
Aug 6 17:40:34 xen03 kernel: blkback: ring-ref 522, event-channel 10, protocol 2 (x86_32-abi)
Aug 6 17:43:26 xen03 kernel: xenbr11: port 9(vif11.0) entering disabled state
Aug 6 17:43:26 xen03 kernel: device vif11.0 left promiscuous mode
Aug 6 17:43:26 xen03 kernel: xenbr11: port 9(vif11.0) entering disabled state

(-- the hang happens about here, and no further syslog messages are recorded after the iowait trails off --)

Any thoughts?
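
P.S. For context, "hard-coded to 2 CPUs and 2GB of RAM with no ballooning" means dom0 settings along these lines (a minimal illustrative sketch -- the exact grub and xend-config entries below are assumed, not copied from the servers):

    # Xen hypervisor boot line in grub.conf: fix dom0 memory at 2GB
    kernel /xen.gz dom0_mem=2048M

    # /etc/xen/xend-config.sxp: restrict dom0 to 2 CPUs and keep xend
    # from ballooning dom0 memory below 2GB
    (dom0-cpus 2)
    (dom0-min-mem 2048)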