Hi,

Using xentrace, I see many offline states in VCPU scheduling, more in HVM guests than in PV kernels. The comments say that offline means the VCPU is not runnable but not blocked (e.g. hotplug, or pauses by the system administrator or for critical sections in the hypervisor).

Can anyone explain what the circumstances are, apart from hotplug and pauses by the system administrator? And what are the critical sections in the hypervisor?

Thanks in advance,
Shawn
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
On Mon, Jun 28, 2010 at 4:59 AM, Yuyang Du <duyuyang@gmail.com> wrote:
> Can anyone explain it to me what are the circumstances except for hotplug and pauses by the system administrator? And what are the critical sections in the hypervisor?

Essentially, offline means "not runnable for any reason other than the guest voluntarily blocking" (either with the SCHED_block hypercall or by executing the HLT instruction).

The main reason for HVM vcpus going offline is QEMU-mediated I/O: the guest vcpu executes a PIO or MMIO instruction, it traps to Xen, and Xen decides to pass it on to QEMU. So Xen pauses the vcpu and sends information about the instruction to QEMU. QEMU in dom0 wakes up, handles the event, and completes the I/O, making the vcpu runnable again. When the vcpu runs again, Xen moves the IP forward and does whatever is appropriate to the registers (e.g., fills them with the value of the read supplied by QEMU).

If you use xenalyze (http://xenbits.xensource.com/ext/xenalyze.hg), it will give you a breakdown of exactly how much time is spent per vcpu in the running, runnable, blocked, and offline states. It will also tell you how much time is spent handling MMIO and PIO per vcpu. The time spent in the offline state for HVM domains should correlate closely with the time spent handling MMIO and PIO.

-George
Thanks. Is it right to say that the time an HVM waits for I/O equals the offline time (given no administrator pauses)?

I am testing an Apache web server in an HVM, and I find that the vcpu blocked state makes up a large portion of the time. Since the HVM cannot issue SCHED_block hypercalls, does the blocked state mean that the VM is not CPU-intensive and often executes HLT to halt itself?

Regards,
Shawn
On 29/06/10 12:42, Yuyang Du wrote:
> Is it right to say that the time an HVM waits for I/O equals the offline time (given no administrator pauses)?

There are some I/O events that are handled inside Xen (such as APIC accesses); these don't cause a vcpu to be paused. I think in the normal course of operation, an HVM vcpu is only paused when doing I/O. Other reasons might be administrator pause, migration, save/restore, domain creation, memory sharing / page swapping, and so on. But if you aren't doing any of those, I think I/O done in QEMU would be the only reason.

> I am testing an Apache web server in an HVM, and I find that the vcpu blocked state makes up a large portion of the time. Since the HVM cannot issue SCHED_block hypercalls, does the blocked state mean that the VM is not CPU-intensive and often executes HLT to halt itself?

Yes. If you take a trace that includes VMX / SVM events and use xenalyze, you should be able to see the HLT vmexit before blocking.

-George