You, Yongkang
2007-Dec-29 06:40 UTC
[Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
Hi All,
One issue has been fixed in today's nightly testing. 6 issues are still
open in the nightly report.
Fixed Issues:
=============================================
1) Fail to boot SMP Linux with a VT-d NIC assigned on the IA32e platform
This issue only happened under special circumstances, e.g. with 1G of
physical memory and both dom0 and guest memory set to 512M.
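For illustration only, the memory split that triggers this looks roughly like the
following, assuming the xm config syntax of this Xen version and with dom0_mem=512M
also passed on the Xen boot line; the guest name and PCI BDF below are made up.

# Hypothetical HVM guest config fragment reproducing the failing split:
# the host has 1G of physical memory, dom0 is booted with dom0_mem=512M,
# and the guest below is also given 512M with a VT-d NIC assigned.
kernel  = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
name    = "smp-linux-vtd"       # made-up guest name
memory  = 512                   # guest memory in MB
vcpus   = 2                     # SMP guest
pci     = ["01:00.0"]           # assigned NIC (hypothetical BDF)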
Old Issues:
=============================================
1) [Installation] Cannot install 32-bit Fedora 7 with vcpu > 1
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1084
2) [Installation] Fedora 8 IA32e guest installation failure.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1118
3) [Guest Test] SMP Vista HVM guest might blue-screen when doing
large I/O in dom0
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1117
4) PV drivers cannot be built against the RHEL5 kernel.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1116
5) [Device Model] vnif cannot coexist with the 8139 NIC.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=922
6) HVM domain performance would downgrade, after doing save/restore.
Testing environments:
=============================================
PAE
CPU : Xeon(r) Processor 7000 series
Dom0 OS : Fedora 5
Memory size : 8G
IA32e
Dom0 OS : RHEL4u3
CPU : Xeon(r) Processor 5300 series
Memory size : 8G
VTD
Service OS : RHEL5.1
Platform : Core 2 Duo with VT-d support
Dom0 S3
Service OS : RHEL5.1
Platform : Core 2 Duo
Details: (Some cases reported FAIL but pass when retested manually)
=============================================
Platform : PAE
Service OS : Fedora Core release 6 (Zod)
Hardware : paxville
Xen package: 16673:62c38443e9f7
Date: Sat Dec 29 09:18:30 CST 2007
1. 2 PAE SMP VMX domains and 2 xenU domains coexist FAIL
2. one PAE SMP Linux VMX domain with memory 4G PASS
3. Live migration PASS
4. boot a pae on x86_64 xenU PASS
5. boot 4 VMX per processor at the same time PASS
6. boot up 1 winXP VMX and 1 linux PAE VMX FAIL
7. Save and Restore PASS
8. Single domain with single vcpu bound to a CPU PASS
9. one PAE SMP Linux VMX domain with memory 1500M PASS
10. boot up two winXP per processor at the same time PASS
11. boot up one linux VMX with 4 vcpus PASS
12. boot up four linux VMX one by one PASS
13. boot up VMX with acpi=1, vcpu=1 PASS
14. subset LTP test in RHEL4U2 PAE SMP VMX domain PASS
15. boot up the winxp image with vcpus=1 and acpi=1 PASS
16. boot up the vista image with vcpus=1 PASS
17. one IA32 UP ACPI Windows 2K3 VMX domain PASS
18. Boot Linux 2.6.23 base kernel in RHEL4U1 PAE SMP Linux VMX domain PASS
19. one IA32 SMP ACPI Windows 2K3 VMX domain PASS
20. one IA32 SMP ACPI Windows 2K VMX domain PASS
21. one IA32 UP ACPI Windows 2K VMX domain PASS
22. one IA32 SMP ACPI Windows XP VMX domain PASS
23. one IA32 UP NOACPI Windows 2K VMX domain PASS
24. one IA32 UP NOACPI Windows 2K3 VMX domain PASS
25. one IA32 UP NOACPI Windows XP VMX domain PASS
26. kernel build in one linux VMX PASS
27. VBD and VNIF works on UP VMX domain PASS
28. VBD and VNIF works on SMP VMX domain FAIL
29. startx in dom0 PASS
30. boot up one IA32 RHEL5u1 VMX domain. PASS
31. reboot Windows XP after it boots up. PASS
32. reboot Fedora Core 6 after it boots up. PASS
33. assign one pcie nic to one UP Linux guest with vtd. PASS
34. assign one pcie nic to one SMP Linux guest with vtd. PASS
35. assign one pcie nic to one UP WinXP guest with vtd. PASS
36. assign one pci nic to one SMP Linux guest with vtd. PASS
37. assign one pci nic to one UP WinXP guest with vtd. PASS
38. assign one pci nic to one UP Linux guest with vtd PASS
39. scp a big file in Linux guest via the pci nic assigned with vt-d. PASS
40. assign one pcie nic to one SMP WinXP guest with vtd. PASS
41. assign one pci nic to one SMP WinXP guest with vtd. PASS
42. scp a big file in Linux guest via the pcie nic assigned with vt-d. PASS
Platform : PAE
Service OS : Fedora Core release 6 (Zod)
Hardware : paxville
Xen package: 16673:62c38443e9f7
Date: Sat Dec 29 09:18:30 CST 2007
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
device_model 2 1 1 0 0
control_panel 12 10 2 0 0
Restart 1 1 0 0 0
gtest 18 18 0 0 0
====================================================================
device_model 2 1 1 0 0
:pv_on_up_PAE_gPAE 1 1 0 0 0
:pv_on_smp_PAE_gPAE 1 0 1 0 0
control_panel 12 10 2 0 0
:XEN_4G_guest_PAE_gPAE 1 1 0 0 0
:XEN_four_vmx_xenu_seq_P 1 0 1 0 0
:XEN_LM_PAE_gPAE 1 1 0 0 0
:XEN_four_dguest_co_PAE_ 1 1 0 0 0
:XEN_linux_win_PAE_gPAE 1 0 1 0 0
:XEN_SR_PAE_gPAE 1 1 0 0 0
:XEN_vmx_vcpu_pin_PAE_gP 1 1 0 0 0
:XEN_1500M_guest_PAE_gPA 1 1 0 0 0
:XEN_256M_guest_PAE_gPAE 1 1 0 0 0
:XEN_two_winxp_PAE_gPAE 1 1 0 0 0
:XEN_four_sguest_seq_PAE 1 1 0 0 0
:XEN_vmx_4vcpu_PAE_gPAE 1 1 0 0 0
Restart 1 1 0 0 0
:GuestPAE_PAE_gPAE 1 1 0 0 0
gtest 18 18 0 0 0
:boot_up_acpi_PAE_gPAE 1 1 0 0 0
:ltp_nightly_PAE_gPAE 1 1 0 0 0
:reboot_xp_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_xp_PAE_gPA 1 1 0 0 0
:boot_up_vista_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_win2k3_PAE 1 1 0 0 0
:boot_smp_acpi_win2k3_PA 1 1 0 0 0
:boot_up_acpi_win2k_PAE_ 1 1 0 0 0
:boot_smp_acpi_win2k_PAE 1 1 0 0 0
:boot_smp_acpi_xp_PAE_gP 1 1 0 0 0
:boot_up_noacpi_win2k_PA 1 1 0 0 0
:boot_up_noacpi_win2k3_P 1 1 0 0 0
:boot_rhel5u1_PAE_gPAE 1 1 0 0 0
:boot_base_kernel_PAE_gP 1 1 0 0 0
:boot_up_noacpi_xp_PAE_g 1 1 0 0 0
:bootx_PAE_gPAE 1 1 0 0 0
:reboot_fc6_PAE_gPAE 1 1 0 0 0
:kb_nightly_PAE_gPAE 1 1 0 0 0
====================================================================
Total 33 30 3 0 0
Platform : x86_64
Service OS : Red Hat Enterprise Linux AS release 4 (Nahant Update 3)
Hardware : clovertown
Xen package: 16673:62c38443e9f7
Date: Sat Dec 29 08:20:03 CST 2007
1. 2 ia32e SMP VMX domains and 2 xenU domains coexist PASS
2. one ia32e SMP Linux VMX domain with memory 4G PASS
3. Live migration PASS
4. one xenU domain with memory 256M PASS
5. boot 4 VMX per processor at the same time PASS
6. boot up 1 winXP VMX and 1 linux VMX PASS
7. Save and Restore PASS
8. Single domain with single vcpu bound to a CPU PASS
9. one FC5 ia32e SMP Linux VMX domain with memory 1500M PASS
10. boot up two winXP per processor at the same time PASS
11. boot up one linux VMX with 4 vcpus PASS
12. boot up four linux VMX one by one PASS
13. one pae SMP Linux VMX domain with memory 1500M PASS
14. one pae SMP Linux VMX domain with memory 4G PASS
15. one pae SMP Linux VMX domain with memory 256M PASS
16. Boot up VMX with acpi=1, vcpu=1 PASS
17. subset LTP test in RHEL4U2 ia32e SMP VMX domain PASS
18. boot up the winxp image with vcpus=1 and acpi=1 PASS
19. boot up the vista image with vcpus=1 PASS
20. one IA32E UP ACPI Windows 2K3 VMX domain PASS
21. boot Linux 2.6.23 base kernel in ia32e SMP Linux VMX domain PASS
22. one IA32E SMP ACPI Windows 2K3 VMX domain PASS
23. one IA32E SMP ACPI Windows 2K VMX domain PASS
24. one IA32E UP ACPI Windows 2K VMX domain PASS
25. one IA32E SMP ACPI Windows XP VMX domain PASS
26. one IA32E UP NOACPI Windows 2K VMX domain PASS
27. one IA32E UP NOACPI Windows 2K3 VMX domain PASS
28. one IA32E UP NOACPI Windows XP VMX domain PASS
29. kernel build in one ia32e linux VMX PASS
30. VBD and VNIF works on UP VMX domain PASS
31. VBD and VNIF works on SMP VMX domain PASS
32. startx in dom0 PASS
33. boot up one IA32E RHEL5u1 VMX domain. PASS
34. reboot Windows XP after it boots up. PASS
35. reboot Fedora Core 6 after it boots up. PASS
36. assign one pcie nic to one UP Linux guest with vtd. PASS
37. assign one pcie nic to one SMP Linux guest with vtd. PASS
38. assign one pcie nic to one UP WinXP guest with vtd. PASS
39. assign one pci nic to one SMP Linux guest with vtd. PASS
40. assign one pci nic to one UP WinXP guest with vtd. FAIL
41. assign one pci nic to one UP Linux guest with vtd PASS
42. scp a big file in Linux guest via the pci nic assigned with vt-d. PASS
43. assign one pcie nic to one SMP WinXP guest with vtd. PASS
44. assign one pci nic to one SMP WinXP guest with vtd. PASS
45. scp a big file in Linux guest via the pcie nic assigned with vt-d. PASS
Platform : x86_64
Service OS : Red Hat Enterprise Linux AS release 4 (Nahant Update 3)
Hardware : clovertown
Xen package: 16673:62c38443e9f7
Date: Sat Dec 29 08:20:03 CST 2007
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
vtd 10 9 1 0 0
device_model 2 2 0 0 0
control_panel 17 17 0 0 0
Restart 2 2 0 0 0
gtest 16 16 0 0 0
====================================================================
vtd 10 9 1 0 0
:one_pcie_smp_xp_64_g64 1 1 0 0 0
:one_pci_smp_xp_64_g64 1 1 0 0 0
:one_pcie_up_xp_64_g64 1 1 0 0 0
:one_pcie_up_64_g64 1 1 0 0 0
:one_pcie_scp_64_g64 1 1 0 0 0
:one_pci_scp_64_g64 1 1 0 0 0
:one_pci_up_xp_64_g64 1 0 1 0 0
:one_pci_smp_64_g64 1 1 0 0 0
:one_pcie_smp_64_g64 1 1 0 0 0
:one_pci_up_64_g64 1 1 0 0 0
device_model 2 2 0 0 0
:pv_on_smp_64_g64 1 1 0 0 0
:pv_on_up_64_g64 1 1 0 0 0
control_panel 17 17 0 0 0
:XEN_1500M_guest_64_g64 1 1 0 0 0
:XEN_256M_xenu_64_gPAE 1 1 0 0 0
:XEN_vmx_4vcpu_64_g64 1 1 0 0 0
:XEN_1500M_guest_64_gPAE 1 1 0 0 0
:XEN_4G_guest_64_gPAE 1 1 0 0 0
:XEN_256M_guest_64_g64 1 1 0 0 0
:XEN_SR_64_g64 1 1 0 0 0
:XEN_four_sguest_seq_64_ 1 1 0 0 0
:XEN_vmx_vcpu_pin_64_g64 1 1 0 0 0
:XEN_linux_win_64_g64 1 1 0 0 0
:XEN_256M_guest_64_gPAE 1 1 0 0 0
:XEN_LM_64_g64 1 1 0 0 0
:XEN_two_winxp_64_g64 1 1 0 0 0
:XEN_four_vmx_xenu_seq_6 1 1 0 0 0
:XEN_four_sguest_seq_64_ 1 1 0 0 0
:XEN_4G_guest_64_g64 1 1 0 0 0
:XEN_four_dguest_co_64_g 1 1 0 0 0
Restart 2 2 0 0 0
:Guest64_64_gPAE 1 1 0 0 0
:GuestPAE_64_g64 1 1 0 0 0
gtest 16 16 0 0 0
:boot_up_acpi_win2k3_64_ 1 1 0 0 0
:boot_smp_acpi_xp_64_g64 1 1 0 0 0
:bootx_64_g64 1 1 0 0 0
:boot_smp_acpi_win2k_64_ 1 1 0 0 0
:reboot_xp_64_g64 1 1 0 0 0
:boot_up_vista_64_g64 1 1 0 0 0
:boot_up_noacpi_win2k_64 1 1 0 0 0
:boot_base_kernel_64_g64 1 1 0 0 0
:boot_up_acpi_win2k_64_g 1 1 0 0 0
:reboot_fc6_64_g64 1 1 0 0 0
:boot_up_acpi_xp_64_g64 1 1 0 0 0
:boot_smp_acpi_win2k3_64 1 1 0 0 0
:ltp_nightly_64_g64 1 1 0 0 0
:boot_rhel5u1_64_g64 1 1 0 0 0
:boot_up_acpi_64_g64 1 1 0 0 0
:kb_nightly_64_g64 1 1 0 0 0
====================================================================
Total 47 46 1 0 0
Vtd PAE:
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
vtd 10 10 0 0 0
====================================================================
vtd 10 10 0 0 0
:one_pci_smp_xp_PAE 1 1 0 0 0
:one_pci_scp_PAE 1 1 0 0 0
:one_pcie_up_PAE 1 1 0 0 0
:one_pci_up_PAE 1 1 0 0 0
:one_pcie_smp_xp_PAE 1 1 0 0 0
:one_pci_smp_PAE 1 1 0 0 0
:one_pcie_up_xp_PAE 1 1 0 0 0
:one_pcie_scp_PAE 1 1 0 0 0
:one_pci_up_xp_PAE 1 1 0 0 0
:one_pcie_smp_PAE 1 1 0 0 0
====================================================================
Total 10 10 0 0 0
Best Regards,
Yongkang You
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Keir Fraser
2007-Dec-29 09:20 UTC
Re: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
On 29/12/07 06:40, "You, Yongkang" <yongkang.you@intel.com> wrote:

> 6) HVM domain performance would downgrade, after doing save/restore.

How much does it downgrade by?

 -- Keir
Cui, Dexuan
2007-Dec-29 13:24 UTC
RE: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
On my host, for KernelBuild test, it may vary from 3% to 10%, but 30% was
also observed on others' hosts.

Any suggestion?

Thanks
-- Dexuan

-----Original Message-----
From: xen-devel-bounces@lists.xensource.com
[mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Keir Fraser
Sent: Saturday, December 29, 2007 5:20 PM
To: You, Yongkang; xen-devel
Subject: Re: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed

On 29/12/07 06:40, "You, Yongkang" <yongkang.you@intel.com> wrote:

> 6) HVM domain performance would downgrade, after doing save/restore.

How much does it downgrade by?

 -- Keir
Keir Fraser
2007-Dec-29 16:13 UTC
Re: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
Not really. I'd perhaps get Xen to dump information about all hvm_params and
acceleration options and see if anything has been dropped across the
save/restore.

 -- Keir

On 29/12/07 13:24, "Cui, Dexuan" <dexuan.cui@intel.com> wrote:

> On my host, for KernelBuild test, it may vary from 3% to 10%, but 30%
> was also observed on others' hosts.
>
> Any suggestion?
>
> Thanks
> -- Dexuan
Li, Xin B
2007-Dec-30 11:23 UTC
RE: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
Will timer_mode impact guest performance after a save/restore?
-Xin

>-----Original Message-----
>From: xen-devel-bounces@lists.xensource.com
>[mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Keir Fraser
>Sent: December 30, 2007 0:14
>To: Cui, Dexuan; You, Yongkang; xen-devel
>Subject: Re: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
>
>Not really. I'd perhaps get Xen to dump information about all
>hvm_params and acceleration options and see if anything has been
>dropped across the save/restore.
>
> -- Keir
Li, Xin B
2007-Dec-30 11:29 UTC
RE: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
I saw 32bit Vista guest becomes very slow after a save/restore if using
default timer_mode 0, but timer_mode 2 is OK.
-Xin

>-----Original Message-----
>From: xen-devel-bounces@lists.xensource.com
>[mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Li, Xin B
>Sent: December 30, 2007 19:23
>To: Keir Fraser; Cui, Dexuan; You, Yongkang; xen-devel
>Subject: RE: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
>
>Will timer_mode impact guest performance after a save/restore?
>-Xin
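(For reference, timer_mode is set per guest in the HVM config file. A hypothetical
fragment follows, assuming the xm config syntax of this Xen version; all values
other than timer_mode are made up.)

# Hypothetical xm HVM guest config fragment (illustrative values only).
# timer_mode selects the missed-tick policy for the virtual platform timers;
# 0 is the default, 2 is the setting reported above to avoid the slowdown.
kernel     = "/usr/lib/xen/boot/hvmloader"
builder    = "hvm"
name       = "vista32"          # made-up guest name
memory     = 512
vcpus      = 1
timer_mode = 2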
Keir Fraser
2008-Jan-06 22:34 UTC
Re: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
Does the interrupt rate change after save/restore? Perhaps we are creating a
timer-interrupt storm immediately after restore (but this should subside
quickly, or the performance drop should be much more dramatic), or we have
got the restored interrupt rate wrong somehow? Is the save/restore on the
same machine, or across machines?

 -- Keir

On 30/12/07 11:29, "Li, Xin B" <xin.b.li@intel.com> wrote:

> I saw 32bit Vista guest becomes very slow after a save/restore if using
> default timer_mode 0, but timer_mode 2 is OK.
> -Xin
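(One rough way to check this is to sample the guest's timer interrupt counters
before and after the save/restore. A minimal sketch follows, assuming a Linux
guest with /proc/interrupts; the timer line labels vary by kernel, so adjust
as needed.)

#!/usr/bin/env python
# Rough timer-interrupt rate sampler; run inside the guest before and after
# the save/restore and compare the two rates.
import time

def timer_irq_count():
    total = 0
    f = open("/proc/interrupts")
    for line in f:
        if "timer" in line.lower() or "LOC" in line:
            # Sum the per-CPU counters that follow the IRQ label.
            total += sum(int(tok) for tok in line.split()[1:] if tok.isdigit())
    f.close()
    return total

if __name__ == "__main__":
    interval = 10.0
    before = timer_irq_count()
    time.sleep(interval)
    after = timer_irq_count()
    print("timer interrupts/sec: %.1f" % ((after - before) / interval))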
Cui, Dexuan
2008-Jan-07 02:59 UTC
RE: [Xen-devel] VMX status report. Xen:#16673 & Xen0: #372 - one issue fixed
For the virtual interrupt injection rate, there is almost no change across
save/restore. The save/restore happens on the same machine. The downgrade
happens under all combinations of 32pae/32e VMX guests on 32pae/32e Xen, and
it has existed for quite a long period of time.
-- Dexuan

Keir Fraser wrote:
> Does the interrupt rate change after save/restore? Perhaps we are
> creating a timer-interrupt storm immediately after restore (but this
> should subside quickly, or the performance drop should be much more
> dramatic), or we have got the restored interrupt rate wrong somehow?
> Is the save/restore on the same machine, or across machines?
>
> -- Keir