Xu, Jiajun
2010-Jun-07 15:07 UTC
[Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f...
Hi all,

This is our bi-weekly test report for the Xen-unstable tree. Three new bugs were found in these two weeks:
  - Save/Restore and Migration cannot work.
  - CPU panics when destroying a guest.
  - On a 32b system, the CPU panics when the host reboots.

Due to the guest-destroy issue, we cannot test the latest xen changeset 21508; this report is based on changeset 21503. On the bug-fixing side, the three new bugs reported in the last report are all fixed: the XenU and Xen hang issues on the 64b platform are both fixed. We use Pv_ops (xen/master, 2.6.31.13) as Dom0 in our testing.

Status Summary
====================================================================
Feature        Result
--------------------------------------------------------------------
VT-x/VT-x2     PASS
RAS            Buggy
VT-d           Buggy
SR-IOV         Buggy
TXT            PASS
PowerMgmt      PASS
Other          Buggy

New Bugs (3):
====================================================================
1. cpu panic when rebooting system on PAE host
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1623
2. CPU panic when destroying guest
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1622
3. xm save and xm migrate can't work
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1621

Fixed Bugs (3):
====================================================================
1. xen hypervisor hang when creating a guest on 32e platform
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1617
2. CPU panic when running cpu offline
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1616
3. xenu guest can't boot up
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1618

Old P1 Bugs (1):
====================================================================
1. stubdom-based guest hangs at starting when using a qcow image
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1372

Old P2 Bugs (12):
====================================================================
1. Failed to install FC10
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1461
2. Two onboard 82576 NICs assigned to an HVM guest do not work stably when using INTx interrupts
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1459
3. stubdom-based guest hangs when assigning hdc to it
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1373
4. [stubdom] The xm save command hangs while saving <Domain-dm>
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1377
5. [stubdom] Cannot restore a stubdom-based domain
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1378
6. Live migration with md5sum running causes a dma_timer_expiry error in the guest
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1530
7. Very slow mouse/keyboard and no USB thumbdrive detected w/ Core i7 & Pvops
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1541
8. Linux guest boots up very slowly with SDL rendering
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1478
9. [RAS] CPUs are not in the correct NUMA node after hot-adding memory
   http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1573
10. [SR-IOV] Qemu reports a pci_msix_writel error while assigning a VF to a guest
    http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1575
11. Can't create a guest with big memory if Dom0 memory is not limited
    http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1604
12. Add fix for TBOOT/Xen and S3 flow
    http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1611

Xen Info:
====================================================================
Service OS    : Red Hat Enterprise Linux Server release 5.3 (Tikanga)
xen-changeset : 21503:267ecb2ee5bf
pvops git     : commit e4612f9768065b6692dee07af26a7511e936e099
                Merge: a3e7c7b... 7ec9d1e...
                Author: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
ioemu git     : commit ffb0cf2ad55e952dae55e6166c4fcea79be6cd30
                Author: Ian Jackson <ian.jackson@eu.citrix.com>
                Date: Thu Apr 15 17:01:15 2010 +0100

Test Environment:
====================================================================
Service OS : Red Hat Enterprise Linux Server release 5.1 (Tikanga)
Hardware   : Westmere-HEDT 32e

Summary Test Report
====================================================================
                            Total  Pass  Fail  NoResult  Crash
====================================================================
vtd_ept_vpid                   13     9     4         0      0
control_panel_ept_vpid         13     8     5         0      0
ras_ept_vpid                    1     0     1         0      0
gtest_ept_vpid                 21    21     0         0      0
acpi_ept_vpid                   5     0     4         0      1
sriov_ept_vpid                  3     1     1         0      1
====================================================================
vtd_ept_vpid                   13     9     4         0      0
 :two_dev_smp_nomsi_64_g3       1     1     0         0      0
 :two_dev_scp_64_g32e           1     1     0         0      0
 :lm_pcie_up_64_g32e            1     0     1         0      0
 :lm_pcie_smp_64_g32e           1     0     1         0      0
 :hp_pci_up_64_g32e             1     1     0         0      0
 :two_dev_up_64_g32e            1     1     0         0      0
 :two_dev_up_nomsi_64_g32       1     1     0         0      0
 :lm_pci_up_nomsi_64_g32e       1     0     1         0      0
 :two_dev_smp_64_g32e           1     1     0         0      0
 :one_pcie_smp_nomsi_64_g       1     1     0         0      0
 :two_dev_scp_nomsi_64_g3       1     1     0         0      0
 :lm_pci_smp_nomsi_64_g32       1     0     1         0      0
 :one_pcie_smp_64_g32e          1     1     0         0      0
control_panel_ept_vpid         13     8     5         0      0
 :XEN_1500M_guest_64_g32e       1     1     0         0      0
 :XEN_256M_guest_64_gPAE        1     1     0         0      0
 :XEN_256M_xenu_64_gPAE         1     0     1         0      0
 :XEN_LM_Continuity_64_g3       1     0     1         0      0
 :XEN_LM_SMP_64_g32e            1     0     1         0      0
 :XEN_vmx_vcpu_pin_64_g32       1     1     0         0      0
 :XEN_linux_win_64_g32e         1     1     0         0      0
 :XEN_SR_Continuity_64_g3       1     0     1         0      0
 :XEN_vmx_2vcpu_64_g32e         1     1     0         0      0
 :XEN_1500M_guest_64_gPAE       1     1     0         0      0
 :XEN_two_winxp_64_g32e         1     1     0         0      0
 :XEN_256M_guest_64_g32e        1     1     0         0      0
 :XEN_SR_SMP_64_g32e            1     0     1         0      0
ras_ept_vpid                    1     0     1         0      0
 :cpu_online_offline_64_g       1     0     1         0      0
gtest_ept_vpid                 21    21     0         0      0
 :reboot_xp_64_g32e             1     1     0         0      0
 :boot_solaris10u5_64_g32       1     1     0         0      0
 :boot_up_vista_64_g32e         1     1     0         0      0
 :boot_indiana_64_g32e          1     1     0         0      0
 :boot_up_acpi_xp_64_g32e       1     1     0         0      0
 :boot_smp_win7_ent_64_g3       1     1     0         0      0
 :boot_smp_acpi_xp_64_g32       1     1     0         0      0
 :boot_up_acpi_64_g32e          1     1     0         0      0
 :boot_up_win2008_64_g32e       1     1     0         0      0
 :boot_base_kernel_64_g32       1     1     0         0      0
 :kb_nightly_64_g32e            1     1     0         0      0
 :boot_up_acpi_win2k3_64_       1     1     0         0      0
 :boot_nevada_64_g32e           1     1     0         0      0
 :boot_fc9_64_g32e              1     1     0         0      0
 :ltp_nightly_64_g32e           1     1     0         0      0
 :boot_smp_vista_64_g32e        1     1     0         0      0
 :boot_smp_win2008_64_g32       1     1     0         0      0
 :boot_smp_win7_ent_debug       1     1     0         0      0
 :boot_smp_acpi_win2k3_64       1     1     0         0      0
 :boot_rhel5u1_64_g32e          1     1     0         0      0
 :reboot_fc6_64_g32e            1     1     0         0      0
acpi_ept_vpid                   5     0     4         0      1
 :hvm_s3_smp_sr_64_g32e         1     0     1         0      0
 :monitor_c_status_64_g32       1     0     1         0      0
 :hvm_s3_smp_64_g32e            1     0     1         0      0
 :Dom0_S3_64_g32e               1     0     0         0      1
 :monitor_p_status_64_g32       1     0     1         0      0
sriov_ept_vpid                  3     1     1         0      1
 :serial_vfs_smp_64_g32e        1     0     1         0      0
 :one_vf_smp_win2k8_64_g3       1     0     0         0      1
 :one_vf_smp_64_g32e            1     1     0         0      0
====================================================================
Total                          56    39    15         0      2

Service OS : Red Hat Enterprise Linux Server release 5.1 (Tikanga)
Hardware   : Stoakley 32e

Summary Test Report
====================================================================
                            Total  Pass  Fail  NoResult  Crash
====================================================================
vtd_ept_vpid                   13     7     6         0      0
control_panel_ept_vpid         12     7     5         0      0
ras_ept_vpid                    1     0     0         0      1
gtest_ept_vpid                 23    22     1         0      0
acpi_ept_vpid                   3     0     0         0      3
====================================================================
vtd_ept_vpid                   13     7     6         0      0
 :two_dev_scp_nomsi_PAE_g       1     1     0         0      0
 :lm_pci_up_nomsi_PAE_gPA       1     0     1         0      0
 :one_pcie_smp_PAE_gPAE         1     1     0         0      0
 :two_dev_up_PAE_gPAE           1     1     0         0      0
 :lm_pcie_smp_PAE_gPAE          1     0     1         0      0
 :two_dev_smp_nomsi_PAE_g       1     1     0         0      0
 :two_dev_smp_PAE_gPAE          1     1     0         0      0
 :one_pcie_smp_nomsi_PAE_       1     1     0         0      0
 :hp_pci_up_PAE_gPAE            1     0     1         0      0
 :two_dev_up_nomsi_PAE_gP       1     1     0         0      0
 :two_dev_scp_PAE_gPAE          1     0     1         0      0
 :lm_pci_smp_nomsi_PAE_gP       1     0     1         0      0
 :lm_pcie_up_PAE_gPAE           1     0     1         0      0
control_panel_ept_vpid         12     7     5         0      0
 :XEN_4G_guest_PAE_gPAE         1     1     0         0      0
 :XEN_linux_win_PAE_gPAE        1     1     0         0      0
 :XEN_SR_SMP_PAE_gPAE           1     0     1         0      0
 :XEN_LM_SMP_PAE_gPAE           1     0     1         0      0
 :XEN_SR_Continuity_PAE_g       1     0     1         0      0
 :XEN_vmx_vcpu_pin_PAE_gP       1     1     0         0      0
 :XEN_LM_Continuity_PAE_g       1     0     1         0      0
 :XEN_256M_guest_PAE_gPAE       1     0     1         0      0
 :XEN_1500M_guest_PAE_gPA       1     1     0         0      0
 :XEN_256M_xenu_PAE_gPAE        1     1     0         0      0
 :XEN_two_winxp_PAE_gPAE        1     1     0         0      0
 :XEN_vmx_2vcpu_PAE_gPAE        1     1     0         0      0
ras_ept_vpid                    1     0     0         0      1
 :cpu_online_offline_PAE_       1     0     0         0      1
gtest_ept_vpid                 23    22     1         0      0
 :ltp_nightly_PAE_gPAE          1     1     0         0      0
 :boot_up_acpi_PAE_gPAE         1     1     0         0      0
 :reboot_xp_PAE_gPAE            1     1     0         0      0
 :boot_up_acpi_xp_PAE_gPA       1     0     1         0      0
 :boot_up_vista_PAE_gPAE        1     1     0         0      0
 :boot_fc9_PAE_gPAE             1     1     0         0      0
 :boot_smp_win7_ent_PAE_g       1     1     0         0      0
 :boot_up_acpi_win2k3_PAE       1     1     0         0      0
 :boot_smp_acpi_win2k3_PA       1     1     0         0      0
 :boot_smp_acpi_xp_PAE_gP       1     1     0         0      0
 :boot_smp_win7_ent_debug       1     1     0         0      0
 :boot_smp_vista_PAE_gPAE       1     1     0         0      0
 :boot_up_noacpi_win2k3_P       1     1     0         0      0
 :boot_nevada_PAE_gPAE          1     1     0         0      0
 :boot_rhel5u1_PAE_gPAE         1     1     0         0      0
 :boot_indiana_PAE_gPAE         1     1     0         0      0
 :boot_solaris10u5_PAE_gP       1     1     0         0      0
 :boot_base_kernel_PAE_gP       1     1     0         0      0
 :boot_up_win2008_PAE_gPA       1     1     0         0      0
 :boot_up_noacpi_xp_PAE_g       1     1     0         0      0
 :boot_smp_win2008_PAE_gP       1     1     0         0      0
 :reboot_fc6_PAE_gPAE           1     1     0         0      0
 :kb_nightly_PAE_gPAE           1     1     0         0      0
acpi_ept_vpid                   3     0     0         0      3
 :Dom0_S3_PAE_gPAE              1     0     0         0      1
 :hvm_s3_smp_64_gPAE            1     0     0         0      1
 :hvm_s3_smp_sr_64_gPAE         1     0     0         0      1
====================================================================
Total                          52    36    12         0      4

Best Regards,
Jiajun

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Keir Fraser
2010-Jun-07 15:26 UTC
Re: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f...
On 07/06/2010 16:07, "Xu, Jiajun" <jiajun.xu@intel.com> wrote:

> 1. cpu panic when rebooting system on PAE host
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1623

I don't think this can really be a new bug. I will look into a fix for it however.

> 2. CPU panic when destroying guest
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1622

This is fixed already (c/s 21511).

> 3. xm save and xm migrate can't work
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1621

Given this bug was introduced between changesets 21468 and 21492, it can only really have been introduced by Ian Jackson's logging patches. I've cc'ed Ian. He should be able to help get this regression fixed.

 -- Keir
Keir Fraser
2010-Jun-07 15:45 UTC
Re: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f...
On 07/06/2010 16:26, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:

>> 1. cpu panic when rebooting system on PAE host
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1623
>
> I don't think this can really be a new bug. I will look into a fix for it
> however.

Fixed by xen-unstable:21550.

 -- Keir
Ian Jackson
2010-Jun-07 17:12 UTC
Re: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f...
Keir Fraser writes ("Re: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f..."):

> On 07/06/2010 16:07, "Xu, Jiajun" <jiajun.xu@intel.com> wrote:
>> 3. xm save and xm migrate can't work
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1621
>
> Given this bug is introduced between changesets 21468 and 21492, it can only
> really be introduced by Ian Jackson's logging patches. I've cc'ed Ian. He
> should be able to help get this regression fixed.

Based on the error message I agree that my patches are a likely culprit. I'm not sure why this is happening to Jiajun but not to me (I did do some simple tests with xm before sending my patch, and it worked at least as well afterwards as before).

Jiajun: are you using the not-quite-XAPI protocol or traditional XML-RPC? (Do you have "xen-api-server" set in xend-config.sxp?)

What does xend.log say? There may well be a stack trace.

Ian.
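For readers following the thread: the "xen-api-server" directive Ian is asking about lives in xend-config.sxp and, when present, enables xend's Xen-API server alongside the legacy XML-RPC interface. A rough sketch of what enabling it looks like; the port number and the `none` authentication spec below are illustrative values, not a recommendation:

```
; /etc/xen/xend-config.sxp (fragment, illustrative values)
; Listen for Xen-API connections on TCP port 9363 with no authentication:
(xen-api-server ((9363 none)))
```

If the directive is absent or commented out, xm talks to xend over the traditional legacy interface instead.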
Xu, Jiajun
2010-Jun-09 01:31 UTC
RE: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f...
> On 07/06/2010 16:07, "Xu, Jiajun" <jiajun.xu@intel.com> wrote:
>
>> 1. cpu panic when rebooting system on PAE host
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1623
>
> I don't think this can really be a new bug. I will look into a fix for it however.
>
>> 2. CPU panic when destroying guest
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1622
>
> This is fixed already (c/s 21511).

Thanks, we verified it is fixed in c/s 21511.

>> 3. xm save and xm migrate can't work
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1621
>
> Given this bug is introduced between changesets 21468 and 21492, it
> can only really be introduced by Ian Jackson's logging patches. I've
> cc'ed Ian. He should be able to help get this regression fixed.
>
> -- Keir

Best Regards,
Jiajun
Xu, Jiajun
2010-Jun-09 01:34 UTC
RE: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f...
> Keir Fraser writes ("Re: [Xen-devel] Biweekly VMX status report. Xen: #21503 &
> Xen0: #e4612f..."):
>> On 07/06/2010 16:07, "Xu, Jiajun" <jiajun.xu@intel.com> wrote:
>>> 3. xm save and xm migrate can't work
>>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1621
>>
>> Given this bug is introduced between changesets 21468 and 21492, it
>> can only really be introduced by Ian Jackson's logging patches. I've
>> cc'ed Ian. He should be able to help get this regression fixed.
>
> Based on the error message I agree that my patches are a likely
> culprit. I'm not sure why this is happening to Jiajun but not to me
> (I did do some simple tests with xm before sending my patch and it
> worked at least as well afterwards as before.)
>
> Jiajun: are you using the not-quite-XAPI protocol or traditional
> XMLRPC? (Do you have "xen-api-server" set in xend-config.sxp?)
>
> What does xend.log say? There may well be a stack trace.

We are not using "xen-api-server" in xend-config.sxp. The xend log says:

------------
[2010-06-01 10:21:34 5009] DEBUG (XendDomainInfo:1806) Storing domain details:
{'console/port': '6', 'cpu/3/availability': 'online', 'description': '',
'console/limit': '1048576', 'store/port': '5', 'cpu/2/availability': 'online',
'vm': '/vm/cf94ada4-58bc-c508-f1e4-e4968f373125', 'domid': '3',
'image/suspend-cancel': '1', 'cpu/0/availability': 'online',
'memory/target': '262144',
'control/platform-feature-multiprocessor-suspend': '1',
'store/ring-ref': '1044476', 'cpu/1/availability': 'online',
'console/type': 'ioemu', 'name': 'migrating-vCPL_LM_40_1275357679'}
[2010-06-01 10:21:34 5009] DEBUG (XendDomainInfo:1806) Storing domain details:
{'console/port': '6', 'cpu/3/availability': 'online', 'description': '',
'console/limit': '1048576', 'store/port': '5', 'cpu/2/availability': 'online',
'vm': '/vm/cf94ada4-58bc-c508-f1e4-e4968f373125', 'domid': '3',
'image/suspend-cancel': '1', 'cpu/0/availability': 'online',
'memory/target': '262144',
'control/platform-feature-multiprocessor-suspend': '1',
'store/ring-ref': '1044476', 'cpu/1/availability': 'online',
'console/type': 'ioemu', 'name': 'vCPL_LM_40_1275357679'}
[2010-06-01 10:24:18 5009] DEBUG (XendCheckpoint:124) [xc_save]: /usr/lib64/xen/bin/xc_save 57 3 0 0 4
[2010-06-01 10:24:18 5009] ERROR (XendCheckpoint:178) Save failed on domain vCPL_LM_40_1275357679 (3) - resuming.
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 146, in save
    forkHelper(cmd, fd, saveInputHandler, False)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 378, in forkHelper
    child = xPopen3(cmd, True, -1, [fd, xc.handle()])
AttributeError: handle
[2010-06-01 10:24:18 5009] DEBUG (XendDomainInfo:3147) XendDomainInfo.resumeDomain(3)
[2010-06-01 10:24:18 5009] ERROR (xmlrpclib2:178) Internal error handling xend.domain.save
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/util/xmlrpclib2.py", line 131, in _marshaled_dispatch
    response = self._dispatch(method, params)
  File "/usr/lib64/python2.4/SimpleXMLRPCServer.py", line 406, in _dispatch
    return func(*params)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line 1519, in domain_save
    raise e
AttributeError: handle
------------

Best Regards,
Jiajun
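The "AttributeError: handle" in that traceback is Python's generic failure mode when code calls a method the object in scope no longer exposes: forkHelper calls xc.handle(), but whatever is bound to xc there has no such attribute. A minimal, self-contained sketch of the failure mode (the classes below are hypothetical stand-ins, not xend code):

```python
# Hypothetical stand-ins illustrating the failure; NOT actual xend classes.

class XcWithHandle:
    """Models the interface forkHelper expects: xc.handle() yields an fd."""
    def handle(self):
        return 7  # pretend file descriptor

class XcWithoutHandle:
    """Models an 'xc' binding that no longer exposes handle()."""

def fork_helper(fd, xc):
    # Mirrors the failing call in XendCheckpoint.forkHelper:
    #   child = xPopen3(cmd, True, -1, [fd, xc.handle()])
    return [fd, xc.handle()]

print(fork_helper(3, XcWithHandle()))   # [3, 7]

try:
    fork_helper(3, XcWithoutHandle())
except AttributeError as exc:
    # Same failure mode as the xend.log traceback; the exact message
    # wording differs between Python versions and C extensions.
    print("AttributeError:", exc)
```

Which of xend's objects lost its handle() method (and in which changeset) is exactly what the thread goes on to pin down.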
Keir Fraser
2010-Jun-09 06:57 UTC
Re: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f...
On 09/06/2010 02:34, "Xu, Jiajun" <jiajun.xu@intel.com> wrote:

> Traceback (most recent call last):
>   File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 146, in save
>     forkHelper(cmd, fd, saveInputHandler, False)
>   File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 378, in forkHelper
>     child = xPopen3(cmd, True, -1, [fd, xc.handle()])
> AttributeError: handle

This should be fixed by xen-unstable:21570.

 -- Keir
Ian Jackson
2010-Jun-09 09:49 UTC
Re: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f...
Keir Fraser writes ("Re: [Xen-devel] Biweekly VMX status report. Xen: #21503 & Xen0: #e4612f..."):

> This should be fixed by xen-unstable:21570

Thanks.

Ian.