rcgneo@us.ltcfwd.linux.ibm.com wrote on 02/14/2006 11:51:29 AM:

> changeset: 8830:fcc833cbaf82
> tag: tip
> user: kaf24@firebug.cl.cam.ac.uk
> date: Mon Feb 13 10:41:23 2006 +0100
> summary: Return real error code from Xen /dev/mem, not EAGAIN.
>
> x460:
>
> x86_32:
>
> Status:
>
> - dom0 boots fine
> - xend loads fine
> - single HVM domain loads fine
> - multiple HVM domains load fine
> - destruction of any HVM domain causes dom0 to reboot
>
> Issues affecting HVM:
>
> * (same issue) During xm-test, dom0 reboots. This happens during the
>   "11_create_concurrent_pos" test case. The destroy call is causing
>   the reboot.

I am going to look at this now. I will give you a debug hypervisor to get
more info very soon. Thanks.

Regards,
Khoa
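For readers without xm-test at hand: the failing 11_create_concurrent_pos
case boils down to a create loop followed by a destroy loop, and it is the
destroy loop that takes dom0 down in every report in this thread. Below is
a minimal sketch of that shape, not the actual harness: the real test
rewrites /tmp/xm-test.conf per guest and drives each guest's console, and
the name= override here is assumed purely for illustration.

    import os
    import time

    CONF = "/tmp/xm-test.conf"  # config file named in the test logs
    GUESTS = 50                 # the logs show 11_create_0 .. 11_create_49

    # Create phase: start the HVM guests back to back.
    for i in range(GUESTS):
        os.system("xm create %s name=11_create_%d" % (CONF, i))
        time.sleep(20)          # "[dom0] Waiting 20 seconds for domU boot..."

    # Destroy phase: per the reports in this thread, dom0 reboots in here.
    for i in range(GUESTS):
        os.system("xm destroy 11_create_%d" % i)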
changeset: 8830:fcc833cbaf82
tag: tip
user: kaf24@firebug.cl.cam.ac.uk
date: Mon Feb 13 10:41:23 2006 +0100
summary: Return real error code from Xen /dev/mem, not EAGAIN.

x460:

x86_32:

Status:

- dom0 boots fine
- xend loads fine
- single HVM domain loads fine
- multiple HVM domains load fine
- destruction of any HVM domain causes dom0 to reboot

Issues affecting HVM:

* (same issue) During xm-test, dom0 reboots. This happens during the
  "11_create_concurrent_pos" test case. The destroy call is causing the
  reboot.

Details:

dom0 will also reboot with the following console messages (output from
several CPUs is interleaved, and some of it is corrupted):

(XEN) HVM_PIT: guest freq in cycles=3002234
(XEN) CPU: -14688196
(XEN) EI N þÅÿì1ÿN(XEN) CPU: 12
(XEN) EIP: e008:[<ff117988>]CPU: 9
(XEN) EIP: e008:[<ff111584>]CPU: 4
(XEN) EIP: e008:[<ff111584>] timer_softirq_action+0x64/0x140
(XEN) EFLAGS: 00010006 CONTEXT: hypervisor
(XEN) eax: 0dee319b ebx: ffff4d85 ecx: 00000000 edx: 000000c0
(XEN) esi: ff1e9a00 edi: 00000480 ebp: ffbd2080 esp: ffbd1f68
(XEN) cr0: 8005003b cr3: 00178000
(XEN) ds: e010 es: e010 fs: e010 gs: e010 ss: e010 cs: e008
(XEN) Xen stack trace from esp=ffbd1f68:
(XEN) idle_loop+0x38/0x80
(XEN) EFLAGS: 00010246
(XEN) CR3: 00000000
(XEN) eax: 00000600 ebx: ffbc5fb4 ecx: 00000000 edx: 00000600
(XEN) esi: 00000600 edi: 00000600 ebp: 00000000 esp: ffbc5fa8
(XEN) ds: e010 es: e010 fs: e010 gs: e010 ss: e010
(XEN) ************************************
(XEN) CPU12 DOUBLE FAULT -- system shutdown
(XEN) 0ded646e System needs manual reset.
(XEN) ************************************
(XEN) timer_softirq_action+0x64/0x140
(XEN) EFLAGS: 00010006 CONTEXT: hypervisor
(XEN) eax: 0e162278 ebx: 00010293 ecx: 00000000 edx: 000000c0
(XEN) esi: ff1e7900 edi: 00000200 ebp: 00000000 esp: ffbe5f68
(XEN) cr0: 8005003b cr3: 00178000
(XEN) ds: e010 es: e010 fs: e010 gs: e010 ss: e010 cs: e008
(XEN) Xen stack trace from esp=ffbe5f68:
(XEN) 0e155462 000000c0 00000100 00000200 ffbe5f7c 00000200 00000004 00000200
(XEN) 00000200 00000200 00000000 ff110772 00ef0000 ff117988 ffbe5fb4 ff1179c6
(XEN) ffbe6080 ff19ffa0 ff1ef880 00000000 00000000 00000000 00000000 00000000
(XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
(XEN) 00000000 00000000 00000000 00000000 00000004 ffbe6080
(XEN) Xen call trace:
(XEN) [<ff111584>] timer_softirq_action+0x64/0x140
(XEN) [<ff110772>]000000c0 00000000 00000480 do_softirq+0x32/0x50
(XEN) [<ff117988>]ffbd1f7c 00000480
changeset: 8843:765b0657264d
tag: tip
user: cl349@firebug.cl.cam.ac.uk
date: Wed Feb 15 08:13:10 2006 +0000
summary: Cleanup x86/x86_64 apic.c files.

SAME ISSUE AS BEFORE

x460:

x86_32:

Status:

- dom0 boots fine
- xend loads fine
- single HVM domain loads fine
- multiple HVM domains load fine
- destruction of any HVM domain causes dom0 to reboot

Issues affecting HVM:

* During xm-test, dom0 reboots. This happens during the
  "11_create_concurrent_pos" test case. The destroy call is causing the
  reboot.

Details:

dom0 will also reboot with the same (XEN) console messages shown in the
previous report, ending in the CPU12 double fault.

>> changeset: 8824:4caca2046421
>> tag: tip
>> user: kaf24@firebug.cl.cam.ac.uk
>> date: Mon Feb 13 03:23:26 2006 +0100
>> summary: Fix error exit path in __gnttab_map_grant_ref() to
>>
>> x460:
>>
>> x86_32:
>>
>> Status:
>>
>> - dom0 boots fine
>> - xend loads fine
>> - single HVM domain loads fine
>> - multiple HVM domains load fine
>> - destruction of any HVM domain causes dom0 to reboot
>>
>> Issues affecting HVM:
>>
>> * During xm-test, dom0 reboots. This happens during the
>>   "11_create_concurrent_pos" test case. The destroy call is causing
>>   the reboot.
>>
>> Details:
>>
>> == Last entries of xm-test .output file: ==
>>
>> Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_5']
>> [11_create_5] Sending `foo'
>> [11_create_5] Sending `ls'
>> [11_create_5] Sending `echo $?'
>> [5] Started 11_create_5
>> [dom0] Running `xm create /tmp/xm-test.conf'
>> Using config file "/tmp/xm-test.conf".
>> Started domain 11_create_6
>> [dom0] Waiting 20 seconds for domU boot...
>> Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_6']
>> [11_create_6] Sending `foo'
>> [11_create_6] Sending `ls'
>> [11_create_6] Sending `echo $?'
>> [6] Started 11_create_6
>> [dom0] Running `xm create /tmp/xm-test.conf'
>> Using config file "/tmp/xm-test.conf".
>> Started domain 11_create_7
>> [dom0] Waiting 20 seconds for domU boot...
>> Console executing: ['/usr/sbin/xm', 'xm',
>> 'consolvmxdom2:/tmp/xm-test-results/021306-vmxdom2
>>
>> == HVM domain output before crash: ==
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: Call Trace:
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c010810a>] show_stack_log_lvl+0xaa/0xe0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c01082f1>] show_registers+0x161/0x1e0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c01084e9>] die+0xd9/0x180
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c0108619>] do_trap+0x89/0xd0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c0108998>] do_invalid_op+0xb8/0xd0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c0107d67>] error_code+0x2b/0x30
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c0147390>] zap_pte_range+0x1b0/0x310
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c01475d9>] unmap_page_range+0xe9/0x1b0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
>> vmxdom2 kernel: [<c014777a>] unmap_vmas+0xda/0x1a0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c014cbde>] exit_mmap+0x6e/0xf0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c0119707>] mmput+0x27/0x80
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c011d45b>] exit_mm+0x6b/0xe0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c011dc79>] do_exit+0xe9/0x380
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c011df86>] do_group_exit+0x36/0x90
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c01277a9>] get_signal_to_deliver+0x269/0x2f0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c01079ab>] do_signal+0x6b/0x170
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c0107aea>] do_notify_resume+0x3a/0x3c
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: [<c0107c8b>] work_notifysig+0x13/0x18
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: Code: 53 0c 8b 42 04 c7 04 24 e2 f2 47 c0 40 89 44 24 04
>> e8 4e d1 fc ff 8b 43 10 c7 04 24 f9 f2 47 c0 89 44 24 04 e8 3b d1 fc ff
>> eb 84 <0f> 0b 2b 02 b7 f2 47 c0 eb 80 eb 0d 90 90 90 90 90 90 90 90 90
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: Bad page state in process 'qemu-dm'
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: page:c131b340 flags:0x00000004 mapping:00000000
>> mapcount:-1 count:0
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: Trying to fix it up, but a reboot is needed
>>
>> Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
>> vmxdom2 kernel: Backtrace:
>>
>> == dom0 Serial Console output: ==
>>
>> Code: 08 2d f8 04 00 00 89 04 24 e8 1a 1f 00 00 c9 c3 90 8d b4 26 00 00
>> 00 00 55 89 e5 83 ec 08 89 74 24 04 8b 75 0c 8b 55 08 89 1c 24 <ff> 0e
>> 8d 5a 20 8b 42 20 8b 4b 04 89 01 89 48 04 c7 43 04 00 02
>> <1>Fixing recursive fault but reboot is needed!
>> Unable to handle kernel NULL pointer dereference at virtual address 00000000
>>  printing eip:
>> c01174e3
>> *pde = ma 00000000 pa 55555000
>> Oops: 0002 [#18]
>> Modules linked in: thermal processor fan button battery ac sworks_agp agpgart
>> CPU: 0
>> EIP: 0061:[<c01174e3>] Tainted: G B VLI
>> EFLAGS: 00010096 (2.6.16-rc2-xen0)
>> EIP is at dequeue_task+0x13/0x50
>> eax: 00000000 ebx: dbc02530 ecx: dbc02530 edx: dbc02530
>> esi: 00000000 edi: 00000010 ebp: d6f184a4 esp: d6f1849c
>> ds: 007b es: 007b ss: 0069
>> Process qemu-dm (pid: 14455, threadinfo=d6f18000 task=dbc02530)
>> Stack: <0>dbc02530 dbc02530 d6f184b8 c011780e dbc02530 00000000 dbc02530 d6f1852c
>>        c045f9d8 dbc02530 c05d5ca0 00000030 00000001 d6f18550 c0107fa1 c04a3b9a
>>        c0107bf1 00000004 00000001 c0107ffa 069f6bc7 2a8e2801 00000156 dbc02530
>> Call Trace:
>>  [<c010810a>] show_stack_log_lvl+0xaa/0xe0
>>  [<c01082f1>] show_registers+0x161/0x1e0
>>  [<c01084e9>] die+0xd9/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c011522c>] do_page_fault+0x3dc/0x651
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c011780e>] deactivate_task+0x1e/0x30
>>  [<c045f9d8>] schedule+0x468/0x6f0
>>  [<c011de8c>] do_exit+0x2fc/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c0108619>] do_trap+0x89/0xd0
>>  [<c0108998>] do_invalid_op+0xb8/0xd0
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c01147e2>] __pgd_pin+0x32/0x50
>>  [<c0114894>] mm_pin+0x14/0x20
>>  [<c045fa08>] schedule+0x498/0x6f0
>>  [<c011dd71>] do_exit+0x1e1/0x380
>>  [<c010858b>] die+0x17b/0x180
>>  [<c0108619>] do_trap+0x89/0xd0
>>  [<c0108998>] do_invalid_op+0xb8/0xd0
>>  [<c0107d67>] error_code+0x2b/0x30
>>  [<c01147e2>] __pgd_pin+0x32/0x50
>>  [<c0114894>] mm_pin+0x14/0x20
>>  [<c045fa08>] schedule+0x498/0x6f0
>>  [<c04604b0>] schedule_timeout+0x50/0xa0
>>  [<c016e057>] do_select+0x277/0x2e0
>>  [<c016e2d3>] core_sys_select+0x1c3/0x310
>>  [<c016e4d1>] sys_select+0xb1/0x160
>>  [<c0107bf1>] syscall_call+0x7/0xb
>> Code: 08 2d f8 04 00 00 89 04 24 e8 1a 1f 00 00 c9 c3 90 8d b4 26 00 00
>> 00 00 55 89 e5 83 ec 08 89 74 24 04 8b 75 0c 8b 55 08 89 1c 24 <ff> 0e
>> 8d 5a 20 8b 42 20 8b 4b 04 89 01 89 48 04 c7 43 04 00 02
>> <1>Fixing recursive fault but reboot is needed!
>> Unable to handle kernel NULL pointer dereference at virtual address 00000000
>>  printing eip:
>> c01174e3
>> *pde = ma 00000000 pa 55555000
>> Oops: 0002 [#19]
>> Modules linked in: thermal processor fan button battery ac sworks_agp agpgart
>> CPU: 0
>> EIP: 0061:[<c01174e3>] Tainted: G B VLI
>> EFLAGS: 00010082 (2.6.16-rc2-xen0)
>> EIP is at dequeue_task+0x13/0x50
>> eax: 00000000 ebx: dbc02530 ecx: dbc02530 edx: dbc02530
>> esi: 00000000 edi: 00000010 ebp: d6f1830c esp: d6f18304
>> ds: 007b es: 007b ss: 0069
>> Unable to handle kernel NULL pointer dereference at virtual address 00000078
>>  printing eip:
>> c0114f08
>> *pde = ma 00000000 pa 55555000
>> Oops: 0000 [#20]
>> Modules linked in: thermal processor fan button battery ac sworks_agp agpgart
>> CPU: 0
>> EIP: 0061:[<c0114f08>] Tainted: G B VLI
>> EFLAGS: 00010046 (2.6.16-rc2-xen0)
>> EIP is at do_page_fault+0xb8/0x651
>> eax: d6efc000 ebx: 0f00fff0 ecx: 0000007b edx: 00000000
>> esi: 0000000d edi: c0114e50 ebp: d6efc0f8 esp: d6efc0a0
>> ds: 007b es: 007b ss: 0069
>> Unable to handle kernel paging request at virtual address 27bd808e
>>  printing eip:
>> c0114f08
>> *pde = ma 00000000 pa 55555000
>> Recursive die() failure, output suppressed
>>  <0>Kernel panic - not syncing: Fatal exception in interrupt
>>  (XEN) Domain 0 shutdown: rebooting machine.
changeset: 8885:20b95517cbf1
tag: tip
user: kaf24@firebug.cl.cam.ac.uk
date: Sun Feb 19 02:06:44 2006 +0100
summary: Fix get_mfn_from_gpfn_foreign for HVM guests.

x460:

x86_32:

Status:

- dom0 boots fine
- xend loads fine
- single HVM domain loads fine
- multiple HVM domains load fine
- destruction of any HVM domain causes dom0 to reboot

Issues affecting HVM:

* During xm-test, dom0 reboots. This happens during the
  "11_create_concurrent_pos" test case. The destroy call is causing the
  reboot.

Details:

xend.log right before the reboot:

[2006-02-20 01:25:04 xend.XendDomainInfo] DEBUG (XendDomainInfo:1272) XendDomainInfo.destroy: domid=78
[2006-02-20 01:25:04 xend.XendDomainInfo] DEBUG (XendDomainInfo:1280) XendDomainInfo.destroyDomain(78)
[2006-02-20 01:25:04 xend.XendDomainInfo] ERROR (XendDomainInfo:1222) XendDomainInfo.cleanup: image.destroy() failed.
Traceback (most recent call last):
  File "/usr/lib/python2.3/xen/xend/XendDomainInfo.py", line 1220, in cleanupDomain
    self.image.destroy()
  File "/usr/lib/python2.3/xen/xend/image.py", line 372, in destroy
    os.waitpid(self.pid, 0)
OSError: [Errno 10] No child processes
[2006-02-20 01:25:04 xend.XendDomainInfo] DEBUG (XendDomainInfo:1272) XendDomainInfo.destroy: domid=79
[2006-02-20 01:25:04 xend.XendDomainInfo] DEBUG (XendDomainInfo:1280) XendDomainInfo.destroyDomain(79)
[2006-02-20 01:25:04 xend.XendDomainInfo] ERROR (XendDomainInfo:1222) XendDomainInfo.cleanup: image.destroy() failed.
Traceback (most recent call last):
  File "/usr/lib/python2.3/xen/xend/XendDomainInfo.py", line 1220, in cleanupDomain
    self.image.destroy()
  File "/usr/lib/python2.3/xen/xend/image.py", line 372, in destroy
    os.waitpid(self.pid, 0)
OSError: [Errno 10] No child processes
[2006-02-20 01:25:04 xend.XendDomainInfo] DEBUG (XendDomainInfo:1272) XendDomainInfo.destroy: domid=80
[2006-02-20 01:25:04 xend.XendDomainInfo] DEBUG (XendDomainInfo:1280) XendDomainInfo.destroyDomain(80)
[2006-02-20 01:25:05 xend.XendDomainInfo] ERROR (XendDomainInfo:1222) XendDomainInfo.cleanup: image.destroy() failed.
Traceback (most recent call last):
  File "/usr/lib/python2.3/xen/xend/XendDomainInfo.py", line 1220, in cleanupDomain
    self.image.destroy()
  File "/usr/lib/python2.3/xen/xend/image.py", line 372, in destroy
    os.waitpid(self.pid, 0)
OSError: [Errno 10] No child processes
[2006-02-20 01:25:05 xend.XendDomainInfo] DEBUG (XendDomainInfo:1272) XendDomainInfo.destroy: domid=81
[2006-02-20 01:25:05 xend.XendDomainInfo] DEBUG (XendDomainInfo:1280) XendDomainInfo.destroyDomain(81)
[2006-02-20 01:25:05 xend.XendDomainInfo] ERROR (XendDomainInfo:1222) XendDomainInfo.cleanup: image.destroy() failed.
Traceback (most recent call last):
  File "/usr/lib/python2.3/xen/xend/XendDomainInfo.py", line 1220, in cleanupDomain
    self.image.destroy()
  File "/usr/lib/python2.3/xen/xend/image.py", line 372, in destroy
    os.waitpid(self.pid, 0)
OSError: [Errno 10] No child processes
[2006-02-20 01:25:05 xend.XendDomainInfo] DEBUG (XendDomainInfo:1272) XendDomainInfo.destroy: domid=82
[2006-02-20 01:25:05 xend.XendDomainInfo] DEBUG (XendDomainInfo:1280) XendDomainInfo.destroyDomain(82)
[2006-02-20 01:25:05 xend.XendDomainInfo] ERROR (XendDomainInfo:1222) XendDomainInfo.cleanup: image.destroy() failed.
Traceback (most recent call last):
  File "/usr/lib/python2.3/xen/xend/XendDomainInfo.py", line 1220, in cleanupDomain
    self.image.destroy()
  File "/usr/lib/python2.3/xen/xend/image.py", line 372, in destroy
    os.waitpid(self.pid, 0)
OSError: [Errno 10] No child processes
[2006-02-20 01:25:06 xend.XendDomainInfo] DEBUG (XendDomainInfo:1272) XendDomainInfo.destroy: domid=83
[2006-02-20 01:25:06 xend.XendDomainInfo] DEBUG (XendDomainInfo:1280) XendDomainInfo.destroyDomain(83)
[2006-02-20 01:25:06 xend.XendDomainInfo] ERROR (XendDomainInfo:1222) XendDomainInfo.cleanup: image.destroy() failed.
Traceback (most recent call last):
  File "/usr/lib/python2.3/xen/xend/XendDomainInfo.py", line 1220, in cleanupDomain
    self.image.destroy()
  File "/usr/lib/python2.3/xen/xend/image.py", line 372, in destroy
    os.waitpid(self.pid, 0)
OSError: [Errno 10] No child processes

****** NOTE: after this, the system rebooted ************

output file:

PASS: 11_create_concurrent_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name                            ID  Mem(MiB)  VCPUs  State  Time(s)
Domain-0                         0       500      1  r-----   322.4
11_create_0                     60        24      1  -b----    68.7
11_create_1                     61        24      1  -b----    67.4
11_create_2                     62        24      1  -b----    66.0
11_create_3                     63        24      1  -b----    66.1
11_create_4                     64        24      1  -b----    65.3
11_create_5                     65        24      1  -b----    63.9
11_create_6                     66        24      1  -b----    61.3
11_create_7                     67        24      1  -b----    62.9
11_create_8                     68        24      1  -b----    58.5
11_create_9                     69        24      1  -b----    58.3
11_create_10                    70        24      1  -b----    57.2
11_create_11                    71        24      1  -b----    55.9
11_create_12                    72        24      1  r-----    53.7
11_create_13                    73        24      1  r-----    53.1
11_create_14                    74        24      1  -b----    50.8
11_create_15                    75        24      1  -b----    51.8
11_create_16                    76        24      1  ------    50.8
11_create_17                    77        24      1  -b----    47.9
11_create_18                    78        24      1  -b----    45.9
11_create_19                    79        24      1  -b----    46.4
11_create_20                    80        24      1  -b----    43.0
11_create_21                    81        24      1  -b----    42.0
11_create_22                    82        24      1  -b----    41.1
11_create_23                    83        24      1  -b----    41.9
11_create_24                    84        24      1  r-----    38.8
11_create_25                    85        24      1  -b----    37.4
11_create_26                    86        24      1  -b----    36.0
11_create_27                    87        24      1  -b----    34.9
11_create_28                    88        24      1  r-----    32.9
11_create_29                    89        24      1  -b----    33.9
11_create_30                    90        24      1  -b----    30.4
11_create_31                    91        24      1  -b----    30.6
11_create_32                    92        24      1  -b----    28.4
11_create_33                    93        24      1  -b----    27.1
11_create_34                    94        24      1  -b----    26.4
11_create_35                    95        24      1  -b----    24.6
11_create_36                    96        24      1  -b----    23.7
11_create_37                    97        24      1  -b----    22.3
11_create_38                    98        24      1  -b----    20.2
11_create_39                    99        24      1  -b----    19.7
11_create_40                   100        24      1  -b----    17.9
11_create_41                   101        24      1  -b----    16.9
11_create_42                   102        24      1  -b----    15.2
11_create_43                   103        24      1  -b----    14.2
11_create_44                   104        24      1  -b----    12.7
11_create_45                   105        24      1  -b----    11.4
11_create_46                   106        24      1  -b----     9.9
11_create_47                   107        24      1  -b----     8.8
11_create_48                   108        24      1  -b----     7.4
11_create_49                   109        24      1  -b----     6.2
[dom0] Running `xm destroy 11_create_0'
[dom0] Running `xm destroy 11_create_1'
[dom0] Running `xm destroy 11_create_2'
[dom0] Running `xm destroy 11_create_3'
[dom0] Running `xm destroy 11_create_4'
[dom0] Running `xm destroy 11_create_5'
[dom0] Running `xm destroy 11_create_6'
[dom0] Running `xm destroy 11_create_7'
[dom0] Running `xm destroy 11_create_8'
[dom0] Running `xm destroy 11_create_9'
[dom0] Running `xm destroy 11_create_10'
[dom0] Running `xm destroy 11_create_11'
[dom0] Running `xm destroy 11_create_12'
[dom0] Running `xm destroy 11_cr

*** NOTE: it passed 11_create and rebooted during destruction on VMX domains.

regards,
Rick Gonzalez
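The OSError repeated through the xend.log excerpt above is errno 10,
ECHILD: by the time image.destroy() reaches os.waitpid(), the qemu-dm
child has already been reaped, so there is no child left to wait for.
Below is a minimal sketch of a tolerant wrapper, written against the
Python 2.3 environment shown in the traceback; the helper name is
hypothetical and this is not xend's actual fix.

    import errno
    import os

    def reap_device_model(pid):
        # Hypothetical helper: wait for the device model (qemu-dm) to
        # exit, but tolerate it having been collected already, e.g. by
        # a SIGCHLD handler or an earlier destroy attempt.
        try:
            os.waitpid(pid, 0)
        except OSError, e:  # Python 2.x syntax, matching the traceback
            # ECHILD is errno 10 ("No child processes"): the child is
            # already gone, so treat the device model as destroyed.
            if e.errno != errno.ECHILD:
                raise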