Li, Haicheng
2008-Jan-16 07:17 UTC
[Xen-devel] VMX status report. Xen: #16720 & Xen0: #379 -- no new issue.
Hi all,
This is today's nightly testing report; no new issues were found.
Old Issues:
=============================================
1) [Guest Test] An SMP Vista HVM guest may blue-screen when dom0 is doing large I/O.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1117
2) HVM domain performance degrades after a save/restore.
3) FC7/FC8 guests continuously pop up error messages.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1136
4) A XenU guest hangs if booted immediately after destroying an HVM guest.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1139
Testing Environments:
=============================================
PAE
CPU : Xeon(r) Processor 7000 series
Dom0 OS : Fedora6
Memory size : 8G
IA32e
CPU : Xeon(r) processor 5300 series
Dom0 OS : RHEL4u3
Memory size : 8G
VTD(PAE)
Service OS : RHEL5
Platform : Core 2 Duo with vt-d supported
Dom0 S3
Service OS : RHEL5.1
Platform : Core 2 Duo
Details: (the failed cases pass when retested manually)
=============================================
Platform : PAE
Service OS : Fedora Core release 6 (Zod)
Hardware : Paxville
Xen package: 16720:d13c4d2836a8
Date: Wed Jan 16 08:49:20 CST 2008
1. 2 PAE SMP VMX domains and 2 xenU domains coexist FAIL
2. one PAE SMP Linux VMX domain with memory 4G PASS
3. Live migration PASS
4. boot a pae on x86_64 xenU PASS
5. boot 4 VMX per processor at the same time PASS
6. boot up 1 winXP VMX and 1 linux PAE VMX PASS
7. Save and Restore PASS
8. Single domain with single vcpu bind a CPU PASS
9. one PAE SMP Linux VMX domain with memory 1500M PASS
10. boot up two winXP per processor at the same time PASS
11. boot up one linux VMX with 4 vcpus PASS
12. boot up four linux VMX one by one PASS
13. boot up VMX with acpi=1, vcpu=1 PASS
14. subset LTP test in RHEL4U2 PAE SMP VMX domain PASS
15. boot up the winxp image with vcpus=1 and acpi=1 PASS
16. boot up the vista image with vcpus=1 PASS
17. one IA32 UP ACPI Windows 2K3 VMX domain PASS
18. Boot Linux 2.6.23 base kernel in RHEL4U1 PAE SMP Linux VMX domain PASS
19. one IA32 SMP ACPI Windows 2K3 VMX domain PASS
20. one IA32 SMP ACPI Windows 2K VMX domain PASS
21. one IA32 UP ACPI Windows 2K VMX domain PASS
22. one IA32 SMP ACPI Windows XP VMX domain PASS
23. one IA32 UP NOACPI Windows 2K VMX domain PASS
24. one IA32 UP NOACPI Windows 2K3 VMX domain PASS
25. one IA32 UP NOACPI Windows XP VMX domain PASS
26. kernel build in one linux VMX PASS
27. VBD and VNIF works on UP VMX domain PASS
28. VBD and VNIF works on SMP VMX domain FAIL
29. startx in dom0 PASS
30. boot up one IA32 RHEL5u1 VMX domain. PASS
31. reboot Windows xp after it boot up. PASS
32. reboot Fedora core 6 after it boot up. PASS
33. assign one pcie nic to one UP Linux guest with vtd. PASS
34. assign one pcie nic to one SMP Linux guest with vtd. PASS
35. assign one pcie nic to one UP WinXP guest with vtd. PASS
36. assign one pci nic to one SMP Linux guest with vtd. PASS
37. assign one pci nic to one UP WinXP guest with vtd. PASS
38. assign one pci nic to one UP Linux guest with vtd PASS
39. scp a big file in Linux guest via the pci nic assigned with vt-d. PASS
40. assign one pcie nic to one SMP WinXP guest with vtd. PASS
41. assign one pci nic to one SMP WinXP guest with vtd. PASS
42. scp a big file in Linux guest via the pcie nic assigned with vt-d. PASS
43. RTC wakes up Dom0-S3 PASS
44. SMP-Dom0 CPU number does not change after S3 waking PASS
45. Dom0 Free memory info does not change much after S3 waking PASS
46. Dom0 continuously does S3 for 20 times CRASH
47. Dom0 Timer check after S3 resuming PASS
Platform : PAE
Service OS : Fedora Core release 6 (Zod)
Hardware : Paxville
Xen package: 16720:d13c4d2836a8
Date: Wed Jan 16 08:49:20 CST 2008
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
device_model 2 1 1 0 0
control_panel 12 11 1 0 0
Restart 1 1 0 0 0
gtest 18 18 0 0 0
====================================================================
device_model 2 1 1 0 0
:pv_on_up_PAE_gPAE 1 1 0 0 0
:pv_on_smp_PAE_gPAE 1 0 1 0 0
control_panel 12 11 1 0 0
:XEN_4G_guest_PAE_gPAE 1 1 0 0 0
:XEN_four_vmx_xenu_seq_P 1 0 1 0 0
:XEN_LM_PAE_gPAE 1 1 0 0 0
:XEN_four_dguest_co_PAE_ 1 1 0 0 0
:XEN_linux_win_PAE_gPAE 1 1 0 0 0
:XEN_SR_PAE_gPAE 1 1 0 0 0
:XEN_vmx_vcpu_pin_PAE_gP 1 1 0 0 0
:XEN_1500M_guest_PAE_gPA 1 1 0 0 0
:XEN_256M_guest_PAE_gPAE 1 1 0 0 0
:XEN_two_winxp_PAE_gPAE 1 1 0 0 0
:XEN_four_sguest_seq_PAE 1 1 0 0 0
:XEN_vmx_4vcpu_PAE_gPAE 1 1 0 0 0
Restart 1 1 0 0 0
:GuestPAE_PAE_gPAE 1 1 0 0 0
gtest 18 18 0 0 0
:boot_up_acpi_PAE_gPAE 1 1 0 0 0
:ltp_nightly_PAE_gPAE 1 1 0 0 0
:reboot_xp_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_xp_PAE_gPA 1 1 0 0 0
:boot_up_vista_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_win2k3_PAE 1 1 0 0 0
:boot_smp_acpi_win2k3_PA 1 1 0 0 0
:boot_up_acpi_win2k_PAE_ 1 1 0 0 0
:boot_smp_acpi_win2k_PAE 1 1 0 0 0
:boot_smp_acpi_xp_PAE_gP 1 1 0 0 0
:boot_up_noacpi_win2k_PA 1 1 0 0 0
:boot_up_noacpi_win2k3_P 1 1 0 0 0
:boot_rhel5u1_PAE_gPAE 1 1 0 0 0
:boot_base_kernel_PAE_gP 1 1 0 0 0
:boot_up_noacpi_xp_PAE_g 1 1 0 0 0
:bootx_PAE_gPAE 1 1 0 0 0
:reboot_fc6_PAE_gPAE 1 1 0 0 0
:kb_nightly_PAE_gPAE 1 1 0 0 0
====================================================================
Total 33 31 2 0 0
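The per-suite rows above should roll up to the Total line. As a quick sanity check (illustrative only, not part of the test harness), the arithmetic can be verified with a short script:

```python
# Per-suite (Total, Pass, Fail, NoResult, Crash) counts copied from the
# PAE summary table above; column-wise sums should match the Total row.
suites = {
    "device_model":  (2, 1, 1, 0, 0),
    "control_panel": (12, 11, 1, 0, 0),
    "Restart":       (1, 1, 0, 0, 0),
    "gtest":         (18, 18, 0, 0, 0),
}
totals = [sum(col) for col in zip(*suites.values())]
print(totals)  # -> [33, 31, 2, 0, 0], matching the Total row
```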
Platform : x86_64
Service OS : Red Hat Enterprise Linux AS release 4 (Nahant Update 3)
Hardware : Clovertown
Xen package: 16720:d13c4d2836a8
Date: Wed Jan 16 11:26:52 CST 2008
1. 2 ia32e SMP VMX domains and 2 xenU domains coexist FAIL
2. one ia32e SMP Linux VMX domain with memory 4G PASS
3. Live migration PASS
4. one xenU domain with memory 256M FAIL
5. boot 4 VMX per processor at the same time PASS
6. boot up 1 winXP VMX and 1 linux VMX PASS
7. Save and Restore PASS
8. Single domain with single vcpu bind a CPU PASS
9. one FC5 ia32e SMP Linux VMX domain with memory 1500M PASS
10. boot up two winXP per processor at the same time PASS
11. boot up one linux VMX with 4 vcpus PASS
12. boot up four linux VMX one by one PASS
13. one pae SMP Linux VMX domain with memory 1500M PASS
14. one pae SMP Linux VMX domain with memory 4G PASS
15. one pae SMP Linux VMX domain with memory 256M PASS
16. Boot up VMX with acpi=1, vcpu=1 PASS
17. subset LTP test in RHEL4U2 ia32e SMP VMX domain PASS
18. boot up the winxp image with vcpus=1 and acpi=1 PASS
19. boot up the vista image with vcpus=1 PASS
20. one IA32E UP ACPI Windows 2K3 VMX domain PASS
21. boot Linux 2.6.23 base kernel in ia32e SMP Linux VMX domain PASS
22. one IA32E SMP ACPI Windows 2K3 VMX domain PASS
23. one IA32E SMP ACPI Windows 2K VMX domain PASS
24. one IA32E UP ACPI Windows 2K VMX domain PASS
25. one IA32E SMP ACPI Windows XP VMX domain PASS
26. one IA32E UP NOACPI Windows 2K VMX domain PASS
27. one IA32E UP NOACPI Windows 2K3 VMX domain PASS
28. one IA32E UP NOACPI Windows XP VMX domain PASS
29. kernel build in one ia32e linux VMX PASS
30. VBD and VNIF works on UP VMX domain PASS
31. VBD and VNIF works on SMP VMX domain PASS
32. startx in dom0 PASS
33. boot up one IA32E RHEL5u1 VMX domain. PASS
34. reboot Windows xp after it boot up. PASS
35. reboot Fedora core 6 after it boot up. PASS
36. assign one pcie nic to one UP Linux guest with vtd. PASS
37. assign one pcie nic to one SMP Linux guest with vtd. PASS
38. assign one pcie nic to one UP WinXP guest with vtd. PASS
39. assign one pci nic to one SMP Linux guest with vtd. PASS
40. assign one pci nic to one UP WinXP guest with vtd. PASS
41. assign one pci nic to one UP Linux guest with vtd PASS
42. scp a big file in Linux guest via the pci nic assigned with vt-d. PASS
43. assign one pcie nic to one SMP WinXP guest with vtd. PASS
44. assign one pci nic to one SMP WinXP guest with vtd. PASS
45. scp a big file in Linux guest via the pcie nic assigned with vt-d. PASS
46. RTC wakes up Dom0-S3 PASS
47. SMP-Dom0 CPU number does not change after S3 waking PASS
48. Dom0 Free memory info does not change much after S3 waking PASS
49. Dom0 continuously does S3 for 20 times PASS
50. Dom0 Timer check after S3 resuming PASS
Platform : x86_64
Service OS : Red Hat Enterprise Linux AS release 4 (Nahant Update 3)
Hardware : Clovertown
Xen package: 16720:d13c4d2836a8
Date: Wed Jan 16 11:26:52 CST 2008
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
vtd 10 10 0 0 0
device_model 2 2 0 0 0
control_panel 17 15 2 0 0
Restart 2 2 0 0 0
gtest 16 16 0 0 0
====================================================================
vtd 10 10 0 0 0
:one_pcie_smp_xp_64_g64 1 1 0 0 0
:one_pci_smp_xp_64_g64 1 1 0 0 0
:one_pcie_up_xp_64_g64 1 1 0 0 0
:one_pcie_up_64_g64 1 1 0 0 0
:one_pcie_scp_64_g64 1 1 0 0 0
:one_pci_scp_64_g64 1 1 0 0 0
:one_pci_up_xp_64_g64 1 1 0 0 0
:one_pci_smp_64_g64 1 1 0 0 0
:one_pcie_smp_64_g64 1 1 0 0 0
:one_pci_up_64_g64 1 1 0 0 0
device_model 2 2 0 0 0
:pv_on_smp_64_g64 1 1 0 0 0
:pv_on_up_64_g64 1 1 0 0 0
control_panel 17 15 2 0 0
:XEN_1500M_guest_64_g64 1 1 0 0 0
:XEN_256M_xenu_64_gPAE 1 0 1 0 0
:XEN_vmx_4vcpu_64_g64 1 1 0 0 0
:XEN_1500M_guest_64_gPAE 1 1 0 0 0
:XEN_4G_guest_64_gPAE 1 1 0 0 0
:XEN_256M_guest_64_g64 1 1 0 0 0
:XEN_SR_64_g64 1 1 0 0 0
:XEN_four_sguest_seq_64_ 1 1 0 0 0
:XEN_vmx_vcpu_pin_64_g64 1 1 0 0 0
:XEN_linux_win_64_g64 1 1 0 0 0
:XEN_256M_guest_64_gPAE 1 1 0 0 0
:XEN_LM_64_g64 1 1 0 0 0
:XEN_two_winxp_64_g64 1 1 0 0 0
:XEN_four_vmx_xenu_seq_6 1 0 1 0 0
:XEN_four_sguest_seq_64_ 1 1 0 0 0
:XEN_4G_guest_64_g64 1 1 0 0 0
:XEN_four_dguest_co_64_g 1 1 0 0 0
Restart 2 2 0 0 0
:Guest64_64_gPAE 1 1 0 0 0
:GuestPAE_64_g64 1 1 0 0 0
gtest 16 16 0 0 0
:boot_up_acpi_win2k3_64_ 1 1 0 0 0
:boot_smp_acpi_xp_64_g64 1 1 0 0 0
:bootx_64_g64 1 1 0 0 0
:boot_smp_acpi_win2k_64_ 1 1 0 0 0
:reboot_xp_64_g64 1 1 0 0 0
:boot_up_vista_64_g64 1 1 0 0 0
:boot_up_noacpi_win2k_64 1 1 0 0 0
:boot_base_kernel_64_g64 1 1 0 0 0
:boot_up_acpi_win2k_64_g 1 1 0 0 0
:reboot_fc6_64_g64 1 1 0 0 0
:boot_up_acpi_xp_64_g64 1 1 0 0 0
:boot_smp_acpi_win2k3_64 1 1 0 0 0
:ltp_nightly_64_g64 1 1 0 0 0
:boot_rhel5u1_64_g64 1 1 0 0 0
:boot_up_acpi_64_g64 1 1 0 0 0
:kb_nightly_64_g64 1 1 0 0 0
====================================================================
Total 47 45 2 0 0
VTD IA32PAE:
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
vtd 10 10 0 0 0
====================================================================
vtd 10 10 0 0 0
:one_pci_smp_xp_PAE 1 1 0 0 0
:one_pci_scp_PAE 1 1 0 0 0
:one_pcie_up_PAE 1 1 0 0 0
:one_pci_up_PAE 1 1 0 0 0
:one_pcie_smp_xp_PAE 1 1 0 0 0
:one_pci_smp_PAE 1 1 0 0 0
:one_pcie_up_xp_PAE 1 1 0 0 0
:one_pcie_scp_PAE 1 1 0 0 0
:one_pci_up_xp_PAE 1 1 0 0 0
:one_pcie_smp_PAE 1 1 0 0 0
====================================================================
Total 10 10 0 0 0
ACPI test result
IA32PAE:
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
acpi 5 4 0 0 1
Restart 1 1 0 0 0
====================================================================
acpi 5 4 0 0 1
:do_S3_20times_PAE_gPAE 1 0 0 0 1
:timer_float_PAE_gPAE 1 1 0 0 0
:cpuinfo_change_PAE_gPAE 1 1 0 0 0
:meminfo_change_PAE_gPAE 1 1 0 0 0
:alarm_S3_wakeup_PAE_gPA 1 1 0 0 0
Restart 1 1 0 0 0
:GuestPAE_PAE_g64 1 1 0 0 0
====================================================================
Total 6 5 0 0 1
IA32E:
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
acpi 5 5 0 0 0
Restart 1 1 0 0 0
====================================================================
acpi 5 5 0 0 0
:meminfo_change_64_g64 1 1 0 0 0
:alarm_S3_wakeup_64_g64 1 1 0 0 0
:timer_float_64_g64 1 1 0 0 0
:do_S3_20times_64_g64 1 1 0 0 0
:cpuinfo_change_64_g64 1 1 0 0 0
Restart 1 1 0 0 0
:Guest64_64_g64 1 1 0 0 0
====================================================================
Total 6 6 0 0 0
-- haicheng
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Steven Hand
2008-Jan-16 14:16 UTC
Re: [Xen-devel] VMX status report. Xen: #16720 & Xen0: #379 -- no new issue.
>2) HVM domain performance would downgrade, after doing save/restore.

Is there any more information on this? I can't find any record of it as
being a 'new issue' ever, just being an 'old issue' for quite a while
now. Is there a bugzilla entry? Or can you just give more details via
email?

cheers,

S.
Li, Haicheng
2008-Jan-17 02:27 UTC
RE: [Xen-devel] VMX status report. Xen: #16720 & Xen0: #379 -- no new issue.
OK, to help us investigate it, I have created a bugzilla entry for it:
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1143. There were
some earlier discussions on this issue; I have attached them below.

Steven Hand wrote:
>> 2) HVM domain performance would downgrade, after doing save/restore.
>
> Is there any more information on this? I can't find any record of it
> as being a 'new issue' ever, just being an 'old issue' for quite a
> while now. Is there a bugzilla entry? Or can you just give more
> details via email?

========================
For the virtual interrupt injection rate, there is almost no change
across save/restore. The save/restore happens on the same machine. The
downgrade happens under all combinations of 32pae/32e VMX guests on
32pae/32e Xen, and it has existed for quite a long period of time.

-- Dexuan

Keir Fraser wrote:
> Does the interrupt rate change after save/restore? Perhaps we are
> creating a timer-interrupt storm immediately after restore (but this
> should subside quickly, or the performance drop should be much more
> dramatic), or we have got the restored interrupt rate wrong somehow?
> Is the save/restore on the same machine, or across machines?
>
> -- Keir
>
> On 30/12/07 11:29, "Li, Xin B" <xin.b.li@intel.com> wrote:
>
>> I saw 32bit Vista guest becomes very slow after a save/restore if
>> using default timer_mode 0, but timer_mode 2 is OK.
>> -Xin
>>
>>> -----Original Message-----
>>> From: xen-devel-bounces@lists.xensource.com
>>> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Li, Xin B
>>> Sent: December 30, 2007 19:23
>>> To: Keir Fraser; Cui, Dexuan; You, Yongkang; xen-devel
>>> Subject: RE: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372
>>> - one issue fixed
>>>
>>> Will timer_mode impact guest performance after a save/restore?
>>> -Xin
>>>
>>>> -----Original Message-----
>>>> From: xen-devel-bounces@lists.xensource.com
>>>> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Keir
>>>> Fraser
>>>> Sent: December 30, 2007 0:14
>>>> To: Cui, Dexuan; You, Yongkang; xen-devel
>>>> Subject: Re: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372
>>>> - one issue fixed
>>>>
>>>> Not really. I'd perhaps get Xen to dump information about all
>>>> hvm_params and acceleration options and see if anything has been
>>>> dropped across the save/restore.
>>>>
>>>> -- Keir
>>>>
>>>> On 29/12/07 13:24, "Cui, Dexuan" <dexuan.cui@intel.com> wrote:
>>>>
>>>>> On my host, for KernelBuild test, it may vary from 3% to 10%, but
>>>>> 30% was also observed on others' hosts.
>>>>>
>>>>> Any suggestion?
>>>>>
>>>>> Thanks
>>>>> -- Dexuan
>>>>>
>>>>> -----Original Message-----
>>>>> From: xen-devel-bounces@lists.xensource.com
>>>>> [mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Keir
>>>>> Fraser
>>>>> Sent: Saturday, December 29, 2007 5:20 PM
>>>>> To: You, Yongkang; xen-devel
>>>>> Subject: Re: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372
>>>>> - one issue fixed
>>>>>
>>>>> On 29/12/07 06:40, "You, Yongkang" <yongkang.you@intel.com> wrote:
>>>>>
>>>>>> 6) HVM domain performance would downgrade, after doing
>>>>>> save/restore.
>>>>>
>>>>> How much does it downgrade by?
>>>>>
>>>>> -- Keir

-- haicheng
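The timer_mode workaround Xin Li mentions would be set in the guest's xm configuration file (which uses Python syntax). The fragment below is a hedged sketch, not a config from the report: the path and the memory/vcpu values are placeholders, and only the timer_mode setting reflects the thread's observation.

```python
# Hypothetical HVM guest config fragment (xm config files are plain Python).
# Values other than timer_mode are illustrative placeholders.
kernel = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
memory = 512
vcpus = 1
acpi = 1
# Per the thread above: with the default timer_mode = 0, a 32-bit Vista
# guest was reported to become very slow after save/restore, while
# timer_mode = 2 avoided the slowdown.
timer_mode = 2
```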