Li, Haicheng
2008-Mar-28 07:23 UTC
[Xen-devel] VMX status report. Xen: #17304 & Xen0: #496 -- no new issue
Hi all,
This is today's nightly testing report; no new issue today. Most of the
case failures are due to bug #1194, listed below.
Old issues:
=============================================
1. HVM Windows guest shows abnormal color.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1180
2. [Guest Test] SMP Vista HVM guest might blue-screen when doing
large I/O in dom0.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1117
3. XenU guest will hang if booted just after destroying an HVM guest.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1139
4. PV drivers broken again: inserting module xen-vnif.ko failed.
5. HVM guest cannot boot up with OpenGL enabled if GLX support does not work.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1194
Testing Environments:
=============================================
PAE
CPU : Xeon(r) processor 5300 series
Dom0 OS : FC6
Memory size : 8G
IA32e
CPU : Xeon(r) processor 5300 series
Dom0 OS : RHEL5
Memory size : 8G
Details:
=============================================
Platform : PAE
Service OS : Fedora Core release 6 (Zod)
Hardware : Clovertown
Xen package: 17304:ed67f68ae2a7
Date: Fri Mar 28 13:12:00 CST 2008
1. 2 PAE SMP VMX domains and 2 xenU domains coexist FAIL
2. Live migration PASS
3. Save and Restore PASS
4. one PAE SMP Linux VMX domain with memory 256M PASS
5. one PAE SMP Linux VMX domain with memory 1500M PASS
6. one PAE SMP Linux VMX domain with memory 4G PASS
7. boot 4 VMX per processor at the same time PASS
8. boot up 1 winXP VMX and 1 linux PAE VMX PASS
9. Single domain with single vcpu bind a CPU PASS
10. boot up two winXP per processor at the same time PASS
11. boot up one linux VMX with 4 vcpus PASS
12. boot up four linux VMX one by one PASS
13. boot up VMX with acpi=1, vcpu=1 PASS
14. subset LTP test in RHEL4U2 PAE SMP VMX domain PASS
15. Boot Linux 2.6.23 base kernel in RHEL4U1 PAE SMP Linux VMX domain PASS
16. one IA32 UP ACPI Windows 2K VMX domain FAIL
17. one IA32 UP ACPI Windows 2K3 VMX domain PASS
18. one IA32 UP ACPI Windows XP VMX domain PASS
19. one IA32 UP ACPI Vista VMX domain FAIL
20. one IA32 SMP ACPI Windows 2K VMX domain FAIL
21. one IA32 SMP ACPI Windows 2K3 VMX domain PASS
22. one IA32 SMP ACPI Windows XP VMX domain PASS
23. one IA32 SMP ACPI Vista VMX domain FAIL
24. one IA32 UP NOACPI Windows 2K VMX domain FAIL
25. one IA32 UP NOACPI Windows 2K3 VMX domain PASS
26. one IA32 UP NOACPI Windows XP VMX domain PASS
27. kernel build in one linux VMX PASS
28. startx in dom0 PASS
29. boot up one IA32 RHEL5u1 VMX domain. PASS
30. boot up one IA32 Fedora 7 VMX domain FAIL
31. boot up one IA32 Fedora 8 VMX domain FAIL
32. reboot Windows xp after it boot up. PASS
33. reboot Fedora core 6 after it boot up. PASS
34. VBD and VNIF work on UP VMX domain NORESULT
35. VBD and VNIF work on SMP VMX domain NORESULT
36. assign one pcie nic to one UP Linux guest with vtd. PASS
37. assign one pcie nic to one SMP Linux guest with vtd. PASS
38. assign one pcie nic to one UP WinXP guest with vtd. FAIL
39. assign one pci nic to one SMP Linux guest with vtd. PASS
40. assign one pci nic to one UP WinXP guest with vtd. FAIL
41. assign one pci nic to one UP Linux guest with vtd PASS
42. scp a big file in Linux guest via the pci nic assigned with vt-d. PASS
43. assign one pcie nic to one SMP WinXP guest with vtd. FAIL
44. assign one pci nic to one SMP WinXP guest with vtd. FAIL
45. scp a big file in Linux guest via the pcie nic assigned with vt-d. PASS
Platform : PAE
Service OS : Fedora Core release 6 (Zod)
Hardware : Clovertown
Xen package: 17304:ed67f68ae2a7
Date: Fri Mar 28 13:12:00 CST 2008
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
vtd 10 6 4 0 0
device_model 2 0 0 2 0
control_panel 12 11 1 0 0
Restart 1 1 0 0 0
gtest 21 15 6 0 0
====================================================================
vtd 10 6 4 0 0
:one_pcie_scp_PAE_gPAE 1 1 0 0 0
:one_pci_smp_xp_PAE_gPAE 1 0 1 0 0
:one_pcie_up_xp_PAE_gPAE 1 0 1 0 0
:one_pci_smp_PAE_gPAE 1 1 0 0 0
:one_pci_up_xp_PAE_gPAE 1 0 1 0 0
:one_pcie_smp_PAE_gPAE 1 1 0 0 0
:one_pcie_smp_xp_PAE_gPA 1 0 1 0 0
:one_pci_scp_PAE_gPAE 1 1 0 0 0
:one_pcie_up_PAE_gPAE 1 1 0 0 0
:one_pci_up_PAE_gPAE 1 1 0 0 0
device_model 2 0 0 2 0
:pv_on_up_PAE_gPAE 1 0 0 1 0
:pv_on_smp_PAE_gPAE 1 0 0 1 0
control_panel 12 11 1 0 0
:XEN_4G_guest_PAE_gPAE 1 1 0 0 0
:XEN_four_vmx_xenu_seq_P 1 0 1 0 0
:XEN_LM_PAE_gPAE 1 1 0 0 0
:XEN_four_dguest_co_PAE_ 1 1 0 0 0
:XEN_linux_win_PAE_gPAE 1 1 0 0 0
:XEN_SR_PAE_gPAE 1 1 0 0 0
:XEN_vmx_vcpu_pin_PAE_gP 1 1 0 0 0
:XEN_two_winxp_PAE_gPAE 1 1 0 0 0
:XEN_256M_guest_PAE_gPAE 1 1 0 0 0
:XEN_1500M_guest_PAE_gPA 1 1 0 0 0
:XEN_vmx_2vcpu_PAE_gPAE 1 1 0 0 0
:XEN_four_sguest_seq_PAE 1 1 0 0 0
Restart 1 1 0 0 0
:GuestPAE_PAE_gPAE 1 1 0 0 0
gtest 21 15 6 0 0
:boot_fc7_PAE_gPAE 1 0 1 0 0
:boot_up_acpi_PAE_gPAE 1 1 0 0 0
:ltp_nightly_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_xp_PAE_gPA 1 1 0 0 0
:reboot_xp_PAE_gPAE 1 0 1 0 0
:boot_up_vista_PAE_gPAE 1 0 1 0 0
:boot_up_acpi_win2k3_PAE 1 1 0 0 0
:boot_smp_acpi_win2k3_PA 1 1 0 0 0
:boot_smp_acpi_win2k_PAE 1 1 0 0 0
:boot_up_acpi_win2k_PAE_ 1 1 0 0 0
:boot_smp_acpi_xp_PAE_gP 1 1 0 0 0
:boot_up_noacpi_win2k_PA 1 1 0 0 0
:boot_smp_vista_PAE_gPAE 1 0 1 0 0
:boot_up_noacpi_win2k3_P 1 1 0 0 0
:boot_rhel5u1_PAE_gPAE 1 1 0 0 0
:boot_base_kernel_PAE_gP 1 1 0 0 0
:boot_up_noacpi_xp_PAE_g 1 1 0 0 0
:bootx_PAE_gPAE 1 0 1 0 0
:reboot_fc6_PAE_gPAE 1 1 0 0 0
:boot_fc8_PAE_gPAE 1 0 1 0 0
:kb_nightly_PAE_gPAE 1 1 0 0 0
====================================================================
Total 46 33 11 2 0
Platform : x86_64
Service OS : Red Hat Enterprise Linux Server release 5 (Tikanga)
Hardware : Clovertown
Xen package: 17304:ed67f68ae2a7
Date: Fri Mar 28 10:15:03 CST 2008
1. boot up four PAE linux VMX one by one PASS
2. one PAE xenU domain with memory 256M PASS
3. one PAE SMP Linux VMX domain with memory 1500M PASS
4. one PAE SMP Linux VMX domain with memory 4G PASS
5. one PAE SMP Linux VMX domain with memory 256M PASS
6. 2 ia32E SMP VMX domains and 2 xenU domains coexist FAIL
7. Live migration PASS
8. Save and Restore PASS
9. one ia32E SMP Linux VMX domain with memory 1500M PASS
10. one ia32E SMP Linux VMX domain with memory 4G PASS
11. one ia32E SMP Linux VMX domain with memory 256M PASS
12. boot 4 VMX per processor at the same time PASS
13. boot up 1 winXP VMX and 1 linux VMX FAIL
14. Single domain with single vcpu bind a CPU PASS
15. boot up two winXP per processor at the same time FAIL
16. boot up one linux VMX with 4 vcpus PASS
17. boot up four linux VMX one by one PASS
18. Boot up VMX with acpi=1, vcpu=1 PASS
19. subset LTP test in RHEL4U2 ia32E SMP VMX domain PASS
20. one IA32E UP ACPI Windows 2K3 VMX domain FAIL
21. one IA32E UP ACPI Windows 2K VMX domain FAIL
22. one IA32E UP ACPI Windows XP VMX domain FAIL
23. one IA32E UP ACPI Vista VMX domain FAIL
24. one IA32E SMP ACPI Windows 2K3 VMX domain FAIL
25. one IA32E SMP ACPI Windows 2K VMX domain FAIL
26. one IA32E SMP ACPI Windows XP VMX domain FAIL
27. one IA32E SMP ACPI Vista VMX domain FAIL
28. one IA32E UP NOACPI Windows 2K VMX domain FAIL
29. boot Linux 2.6.23 base kernel in ia32E SMP Linux VMX domain PASS
30. kernel build in one ia32E linux VMX PASS
31. startx in dom0 FAIL
32. boot up one IA32E RHEL5u1 VMX domain. PASS
33. boot up one IA32E Fedora 7 VMX domain FAIL
34. boot up one IA32E Fedora 8 VMX domain FAIL
35. reboot Windows xp after it boot up. FAIL
36. reboot Fedora core 6 after it boot up. PASS
37. VBD and VNIF work on UP VMX domain NORESULT
38. VBD and VNIF work on SMP VMX domain NORESULT
39. assign one pcie nic to one UP Linux guest with vtd. PASS
40. assign one pcie nic to one SMP Linux guest with vtd. PASS
41. assign one pcie nic to one UP WinXP guest with vtd. FAIL
42. assign one pci nic to one SMP Linux guest with vtd. PASS
43. assign one pci nic to one UP WinXP guest with vtd. FAIL
44. assign one pci nic to one UP Linux guest with vtd PASS
45. scp a big file in Linux guest via the pci nic assigned with vt-d. PASS
46. assign one pcie nic to one SMP WinXP guest with vtd. FAIL
47. assign one pci nic to one SMP WinXP guest with vtd. FAIL
48. scp a big file in Linux guest via the pcie nic assigned with vt-d. PASS
Platform : x86_64
Service OS : Red Hat Enterprise Linux Server release 5 (Tikanga)
Hardware : Clovertown
Xen package: 17304:ed67f68ae2a7
Date: Fri Mar 28 10:15:03 CST 2008
Summary Test Report of Last Session
====================================================================
Total Pass Fail NoResult Crash
====================================================================
vtd 10 6 4 0 0
device_model 2 0 0 2 0
control_panel 17 15 2 0 0
Restart 2 2 0 0 0
gtest 19 6 13 0 0
====================================================================
vtd 10 6 4 0 0
:one_pcie_smp_xp_64_g64 1 0 1 0 0
:one_pci_smp_xp_64_g64 1 0 1 0 0
:one_pcie_up_xp_64_g64 1 0 1 0 0
:one_pcie_up_64_g64 1 1 0 0 0
:one_pcie_scp_64_g64 1 1 0 0 0
:one_pci_scp_64_g64 1 1 0 0 0
:one_pci_up_xp_64_g64 1 0 1 0 0
:one_pci_smp_64_g64 1 1 0 0 0
:one_pcie_smp_64_g64 1 1 0 0 0
:one_pci_up_64_g64 1 1 0 0 0
device_model 2 0 0 2 0
:pv_on_smp_64_g64 1 0 0 1 0
:pv_on_up_64_g64 1 0 0 1 0
control_panel 17 15 2 0 0
:XEN_1500M_guest_64_g64 1 1 0 0 0
:XEN_256M_xenu_64_gPAE 1 1 0 0 0
:XEN_1500M_guest_64_gPAE 1 1 0 0 0
:XEN_4G_guest_64_gPAE 1 1 0 0 0
:XEN_256M_guest_64_g64 1 1 0 0 0
:XEN_SR_64_g64 1 1 0 0 0
:XEN_four_sguest_seq_64_ 1 1 0 0 0
:XEN_vmx_2vcpu_64_g64 1 1 0 0 0
:XEN_vmx_vcpu_pin_64_g64 1 1 0 0 0
:XEN_linux_win_64_g64 1 0 1 0 0
:XEN_256M_guest_64_gPAE 1 1 0 0 0
:XEN_LM_64_g64 1 1 0 0 0
:XEN_two_winxp_64_g64 1 0 1 0 0
:XEN_four_vmx_xenu_seq_6 1 1 0 0 0
:XEN_four_sguest_seq_64_ 1 1 0 0 0
:XEN_4G_guest_64_g64 1 1 0 0 0
:XEN_four_dguest_co_64_g 1 1 0 0 0
Restart 2 2 0 0 0
:Guest64_64_gPAE 1 1 0 0 0
:GuestPAE_64_g64 1 1 0 0 0
gtest 19 6 13 0 0
:boot_smp_acpi_xp_64_g64 1 0 1 0 0
:boot_fc7_64_g64 1 0 1 0 0
:boot_base_kernel_64_g64 1 1 0 0 0
:ltp_nightly_64_g64 1 1 0 0 0
:boot_up_acpi_64_g64 1 1 0 0 0
:boot_up_acpi_win2k3_64_ 1 0 1 0 0
:boot_fc8_64_g64 1 0 1 0 0
:bootx_64_g64 1 0 1 0 0
:reboot_xp_64_g64 1 0 1 0 0
:boot_smp_acpi_win2k_64_ 1 0 1 0 0
:boot_up_vista_64_g64 1 0 1 0 0
:boot_up_noacpi_win2k_64 1 0 1 0 0
:boot_smp_vista_64_g64 1 0 1 0 0
:boot_up_acpi_win2k_64_g 1 0 1 0 0
:reboot_fc6_64_g64 1 1 0 0 0
:boot_up_acpi_xp_64_g64 1 0 1 0 0
:boot_smp_acpi_win2k3_64 1 0 1 0 0
:boot_rhel5u1_64_g64 1 1 0 0 0
:kb_nightly_64_g64 1 1 0 0 0
====================================================================
Total 50 29 19 2 0
-- haicheng
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Keir Fraser
2008-Mar-28 07:26 UTC
Re: [Xen-devel] VMX status report. Xen: #17304 & Xen0: #496 -- no new issue
On 28/3/08 07:23, "Li, Haicheng" <haicheng.li@intel.com> wrote:

> 4. PV drivers broken again: inserting module xen-vnif.ko failed.

Is there a ticket for this issue? Or just post the error printed on
insertion failure. I can probably work out a fix from that.

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Li, Haicheng
2008-Mar-28 13:37 UTC
RE: [Xen-devel] VMX status report. Xen: #17304 & Xen0: #496 -- no new issue
Keir Fraser wrote:
> On 28/3/08 07:23, "Li, Haicheng" <haicheng.li@intel.com> wrote:
>
>> 4. PV drivers broken again: inserting module xen-vnif.ko failed.
>
> Is there a ticket for this issue? Or just post the error printed on
> insertion failure. I can probably work out a fix from that.
>
> -- Keir

Hi Keir,

I rechecked this issue on c/s 17304 and found it has been fixed. We
originally found the issue on c/s 17058, where "insmod xen-vnif.ko"
reported an unknown symbol.

However, I found another two issues on c/s 17304 when testing the PV
drivers:

1. An HVM guest with a generic 2.6.18 kernel cannot boot up; the serial
log shows "io.c:198:d1 MMIO emulation failed @ 0060:c01af276: 0f 6f 06
0f 6f 4e".
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1196

2. On RHEL5.1, "insmod balloon.ko" shows the error below:

Xen version 3.3.
Hypercall area is 1 pages.
xen_mem: Initialising balloon driver.
kobject_add failed for memory with -EEXIST, don't try to register
things with the same name in the same directory.

Call Trace:
 [<ffffffff80141148>] kobject_add+0x16e/0x199
 [<ffffffff8846fa43>] :xen_balloon:balloon_sysfs_init+0x12/0xbb
 [<ffffffff8810d0cb>] :xen_balloon:balloon_init+0xcb/0xf1
 [<ffffffff800a29da>] sys_init_module+0x16a6/0x1857
 [<ffffffff8000c1a5>] _atomic_dec_and_lock+0x39/0x57
 [<ffffffff8005b28d>] tracesys+0xd5/0xe0

http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1197

-- haicheng

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Keir Fraser
2008-Mar-28 14:02 UTC
Re: [Xen-devel] VMX status report. Xen: #17304 & Xen0: #496 -- no new issue
On 28/3/08 13:37, "Li, Haicheng" <haicheng.li@intel.com> wrote:

> 1. An HVM guest with a generic 2.6.18 kernel cannot boot up; the serial
> log shows "io.c:198:d1 MMIO emulation failed @ 0060:c01af276: 0f 6f 06
> 0f 6f 4e".
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1196

Can you provide the vmlinux file for this kernel, or otherwise find out
what code is at address 0xc01af276 in that kernel? Either we are
emulating something we should not emulate, or the kernel really is
accessing I/O memory with SSE instructions, and we'll need to emulate
them.

> 2. On RHEL5.1, "insmod balloon.ko" shows the error below:
>
> Xen version 3.3.
> Hypercall area is 1 pages.
> xen_mem: Initialising balloon driver.
> kobject_add failed for memory with -EEXIST, don't try to register
> things with the same name in the same directory.

Does RHEL5.1 already have a directory /sys/devices/system/memory/? If so,
what is under there? Could you 'ls -lR' that directory and send us the
output?

I don't think anyone actually uses our balloon driver sysfs interface,
so we could just rename it to something a bit more Xen-specific.

 Thanks,
 Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Li, Haicheng
2008-Mar-31 07:44 UTC
RE: [Xen-devel] VMX status report. Xen: #17304 & Xen0: #496 -- no new issue
Keir Fraser wrote:
> On 28/3/08 13:37, "Li, Haicheng" <haicheng.li@intel.com> wrote:
>
>> 1. An HVM guest with a generic 2.6.18 kernel cannot boot up; the
>> serial log shows "io.c:198:d1 MMIO emulation failed @ 0060:c01af276:
>> 0f 6f 06 0f 6f 4e".
>> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1196
>
> Can you provide the vmlinux file for this kernel, or otherwise find
> out what code is at address 0xc01af276 in that kernel? Either we are
> emulating something we should not emulate, or the kernel really is
> accessing I/O memory with SSE instructions, and we'll need to emulate
> them.

It should be located in _mmx_memcpy(); `objdump -d vmlinux` shows:

c01af26b:       89 ef                   mov    %ebp,%edi
c01af26d:       eb 4e                   jmp    c01af2bd <_mmx_memcpy+0xad>
c01af26f:       0f 0d 86 40 01 00 00    prefetch 0x140(%esi)
c01af276:       0f 6f 06                movq   (%esi),%mm0
c01af279:       0f 6f 4e 08             movq   0x8(%esi),%mm1
c01af27d:       0f 6f 56 10             movq   0x10(%esi),%mm2
c01af281:       0f 6f 5e 18             movq   0x18(%esi),%mm3
c01af285:       0f 7f 07                movq   %mm0,(%edi)

>> 2. On RHEL5.1, "insmod balloon.ko" shows the error below:
>>
>> Xen version 3.3.
>> Hypercall area is 1 pages.
>> xen_mem: Initialising balloon driver.
>> kobject_add failed for memory with -EEXIST, don't try to register
>> things with the same name in the same directory.
>
> Does RHEL5.1 already have a directory /sys/devices/system/memory/? If
> so, what is under there? Could you 'ls -lR' that directory and send us
> the output?
>
> I don't think anyone actually uses our balloon driver sysfs
> interface, so we could just rename it to something a bit more
> Xen-specific.

Yes, there is already a directory /sys/devices/system/memory/.
`ls -lR` shows:

/sys/devices/system/memory/:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 block_size_bytes
drwxr-xr-x 2 root root 0 Mar 31 15:34 memory0
drwxr-xr-x 2 root root 0 Mar 31 15:34 memory1
drwxr-xr-x 2 root root 0 Mar 31 15:34 memory2
drwxr-xr-x 2 root root 0 Mar 31 15:34 memory3
drwxr-xr-x 2 root root 0 Mar 31 15:34 memory4
drwxr-xr-x 2 root root 0 Mar 31 15:34 memory5
drwxr-xr-x 2 root root 0 Mar 31 15:34 memory6
drwxr-xr-x 2 root root 0 Mar 31 15:34 memory7
-rwx------ 1 root root 4096 Mar 31 15:34 probe

/sys/devices/system/memory/memory0:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_device
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_index
-rw-r--r-- 1 root root 4096 Mar 31 15:34 state

/sys/devices/system/memory/memory1:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_device
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_index
-rw-r--r-- 1 root root 4096 Mar 31 15:34 state

/sys/devices/system/memory/memory2:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_device
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_index
-rw-r--r-- 1 root root 4096 Mar 31 15:34 state

/sys/devices/system/memory/memory3:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_device
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_index
-rw-r--r-- 1 root root 4096 Mar 31 15:34 state

/sys/devices/system/memory/memory4:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_device
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_index
-rw-r--r-- 1 root root 4096 Mar 31 15:34 state

/sys/devices/system/memory/memory5:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_device
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_index
-rw-r--r-- 1 root root 4096 Mar 31 15:34 state

/sys/devices/system/memory/memory6:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_device
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_index
-rw-r--r-- 1 root root 4096 Mar 31 15:34 state

/sys/devices/system/memory/memory7:
total 0
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_device
-r--r--r-- 1 root root 4096 Mar 31 15:34 phys_index
-rw-r--r-- 1 root root 4096 Mar 31 15:34 state

-- haicheng

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
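[Editorial note, for context: _mmx_memcpy() (arch/i386/lib/mmx.c in
2.6.18-era kernels) streams data through the MMX registers. The C sketch
below is illustrative only, not the kernel's exact code; it simply shows
the kind of loop behind the disassembly above, which emits the 0f 6f
(movq) opcodes reported in bug #1196.]

/*
 * Illustrative sketch only -- not the kernel's actual _mmx_memcpy().
 * The real routine brackets the copy with kernel_fpu_begin()/
 * kernel_fpu_end() and moves larger blocks per iteration.  The point
 * here is that each movq moves 8 bytes through an MMX register, so the
 * hypervisor's MMIO emulator must be able to decode these instructions
 * when such a copy happens to target emulated I/O memory.
 */
static void mmx_copy_32(void *to, const void *from)
{
        __asm__ __volatile__(
                "movq   (%0), %%mm0\n\t"
                "movq  8(%0), %%mm1\n\t"
                "movq 16(%0), %%mm2\n\t"
                "movq 24(%0), %%mm3\n\t"
                "movq %%mm0,   (%1)\n\t"
                "movq %%mm1,  8(%1)\n\t"
                "movq %%mm2, 16(%1)\n\t"
                "movq %%mm3, 24(%1)\n\t"
                : /* no outputs */
                : "r" (from), "r" (to)
                : "memory");
}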
Keir Fraser
2008-Mar-31 08:00 UTC
Re: [Xen-devel] VMX status report. Xen: #17304 & Xen0: #496 -- no new issue
On 31/3/08 08:44, "Li, Haicheng" <haicheng.li@intel.com> wrote:

>> Can you provide the vmlinux file for this kernel, or otherwise find
>> out what code is at address 0xc01af276 in that kernel? Either we are
>> emulating something we should not emulate, or the kernel really is
>> accessing I/O memory with SSE instructions, and we'll need to emulate
>> them.
>
> It should be located in _mmx_memcpy().

How early during boot does it crash? If it's early, perhaps I can repro
if you send me the vmlinux file, or attach it to the bug, or otherwise
make it available for download?

>> Does RHEL5.1 already have a directory /sys/devices/system/memory/? If
>> so, what is under there? Could you 'ls -lR' that directory and send us
>> the output?
>
> Yes, there is already a directory /sys/devices/system/memory/.

Okay, it's generic Linux memory hotplug info. So I've renamed our sysfs
information to /sys/devices/system/xen_memory/.

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
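[Editorial note, for context: the -EEXIST comes from two sysdev classes
competing for the name "memory" under /sys/devices/system/. The sketch
below shows the kind of rename described above, assuming the balloon
driver registers its sysfs entries through a 2.6.18-era sysdev class;
the names are illustrative, not the actual Xen changeset.]

/*
 * Illustrative sketch, not the actual Xen changeset.  The generic
 * memory-hotplug code already owns /sys/devices/system/memory/, so a
 * second sysdev class registered under the same name fails with
 * -EEXIST.  Giving the balloon driver's class a Xen-specific name
 * avoids the clash and yields /sys/devices/system/xen_memory/ instead.
 */
#include <linux/sysdev.h>

#define BALLOON_CLASS_NAME "xen_memory"        /* previously "memory" */

static struct sysdev_class balloon_sysdev_class = {
        set_kset_name(BALLOON_CLASS_NAME),
};

static int __init balloon_sysfs_init(void)
{
        /* Fails with -EEXIST when another class already holds the name. */
        return sysdev_class_register(&balloon_sysdev_class);
}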
Keir Fraser
2008-Mar-31 09:18 UTC
Re: [Xen-devel] VMX status report. Xen: #17304 & Xen0: #496 -- no new issue
On 31/3/08 09:57, "Li, Haicheng" <haicheng.li@intel.com> wrote:

>> How early during boot does it crash? If it's early, perhaps I can
>> repro if you send me the vmlinux file, or attach it to the bug, or
>> otherwise make it available for download?
>
> Since it is too large to be added to bugzilla, vmlinux is attached here.
> It crashes very early in booting, just after uncompressing the kernel.

It's rather big for a ML posting too. It looks like the list server ate
the whole email, which is probably just as well. I received the
attachment to my own email address okay though, and I'll try it out
shortly.

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel