Hi,

Running Xen unstable on a Dell Optiplex 755, after starting and
shutting down a few HVM guests the system crashes with the following
message.

This is with the XenSource 2.6.18.8 kernel:

(XEN) ** page_alloc.c:407 -- 449/512 ffffffffffffffff
(XEN) Xen BUG at page_alloc.c:409
(XEN) ----[ Xen-3.4-unstable x86_64 debug=y Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff828c8011206f>] alloc_heap_pages+0x35a/0x486
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff82840199b820   rcx: 0000000000000001
(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff828c801fedec
(XEN) rbp: ffff830127fdfcb8   rsp: ffff830127fdfc58   r8:  0000000000000004
(XEN) r9:  0000000000000004   r10: 0000000000000010   r11: 0000000000000010
(XEN) r12: ffff828401998000   r13: 00000000000001c1   r14: 0000000000000200
(XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000011f46e000   cr2: 000000000133a000
(XEN) ds: 0000   es: 0000   fs: 0063   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff830127fdfc58:
(XEN)    0000000900000001 0000000000000098 0000000000000200 0000000100000001
(XEN)    ffff830127fdfcf8 ffff828c801a5c68 ffff830127fdfccc ffff830127fdff28
(XEN)    0000000000000027 0000000000000000 ffff83011d9c4000 0000000000000000
(XEN)    ffff830127fdfcf8 ffff828c8011383b 0100000400000009 ffff830127fdff28
(XEN)    0000000000000006 0000000044803760 00000000448037c0 0000000000000000
(XEN)    ffff830127fdff08 ffff828c80110591 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000200 0000000000000001 0000000000000000 0000000000000001
(XEN)    ffff828c80214d00 ffff830127fdfda8 ffff830127fdfde8 ffff828c80115811
(XEN)    0000000000000001 ffff828c80151777 ffff830127fdfda8 ffff828c8013c559
(XEN)    0000000000000004 0000020000000001 ffff830127fdfdb8 ffff828c8013c5f6
(XEN)    ffff830127fdfde8 ffff828c80107247 ffff830127fdfde8 ffff828c8011d73e
(XEN)    0000000000000001 0000000000000000 ffff830127fdfe28 ffff83011d9c4000
(XEN)    0000000000000282 0000000400000009 0000000044803750 ffff8300cfdfc030
(XEN)    ffff830127ff1f28 0000000000000002 ffff830127fdfe58 ffff828c80119cb7
(XEN)    00007cfed8020197 ffff828c80239180 0000000000000002 ffff830127fdfe68
(XEN)    ffff828c8011c0c8 ffff828c80239180 ffff830127fdfe78 ffff828c8014c173
(XEN)    ffff830127fdfe98 ffff828c801d166d ffff830127fdfe98 00000000000cce00
(XEN)    000000000001f600 ffff828c801d2063 0000000044803750 0000000000000004
(XEN)    0000000000000009 000000000000000a 00002b820b4a5eb7 ffff83011d9c4000
(XEN) Xen call trace:
(XEN)    [<ffff828c8011206f>] alloc_heap_pages+0x35a/0x486
(XEN)    [<ffff828c8011383b>] alloc_domheap_pages+0x128/0x17b
(XEN)    [<ffff828c80110591>] do_memory_op+0x988/0x17a7
(XEN)    [<ffff828c801cf1bf>] syscall_enter+0xef/0x149
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Xen BUG at page_alloc.c:409
(XEN) ****************************************
(XEN)

And here again running the openSUSE 2.6.27 kernel:

(XEN) ** page_alloc.c:534 -- 0/1 ffffffffffffffff
(XEN) Xen BUG at page_alloc.c:536
(XEN) ----[ Xen-3.4-unstable x86_64 debug=y Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff828c80112d77>] free_heap_pages+0x12f/0x4b8
(XEN) RFLAGS: 0000000000010206   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff82840236e3e0   rcx: 0000000000000001
(XEN) rdx: ffffffffffffffff   rsi: 000000000000000a   rdi: ffff828c801fedec
(XEN) rbp: ffff830127fdfe90   rsp: ffff830127fdfe40   r8:  0000000000000004
(XEN) r9:  0000000000000004   r10: 0000000000000010   r11: 0000000000000010
(XEN) r12: ffff82840236e3e0   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0080000000000000   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 00000000bf1b2000   cr2: 00000000004878d5
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff830127fdfe40:
(XEN)    0000000000000001 ffff82840236e3e0 0000000127fdfe90 0000000000000001
(XEN)    0000000000000000 0000000000000200 c2c2c2c2c2c2c2c2 ffff82840236e3c0
(XEN)    ffff82840236e2e0 ffff830000000000 ffff830127fdfed0 ffff828c8011329a
(XEN)    000000985dfe4f8c 0000000000000001 ffff830127fdff28 ffff828c80297880
(XEN)    0000000000000002 ffff828c80239100 ffff830127fdff00 ffff828c8011ba21
(XEN)    ffff8800e9be5d80 ffff830127fdff28 ffff828c802375b0 ffff8300cee8a000
(XEN)    ffff830127fdff20 ffff828c8013ca78 0000000000000001 ffff8300cfaee000
(XEN)    ffff830127fdfda8 ffff8800e9be5d80 ffff8800ea1006c0 ffffffff8070f1c0
(XEN)    000000000000008f ffff8800c4df3c98 0000000000000184 0000000000000246
(XEN)    ffff8800c4df3d68 ffff8800eab76b00 0000000000000000 0000000000000000
(XEN)    ffffffff802073aa 0000000000000009 00000000deadbeef 00000000deadbeef
(XEN)    0000010000000000 ffffffff802073aa 000000000000e033 0000000000000246
(XEN)    ffff8800c4df3c60 000000000000e02b 7f766dfbff79beef fddffff4f3b9beef
(XEN)    008488008022beef 0001000a0a03beef f7f5ff7b00000001 ffff8300cfaee000
(XEN) Xen call trace:
(XEN)    [<ffff828c80112d77>] free_heap_pages+0x12f/0x4b8
(XEN)    [<ffff828c8011329a>] page_scrub_softirq+0x19a/0x23c
(XEN)    [<ffff828c8011ba21>] do_softirq+0x6a/0x77
(XEN)    [<ffff828c8013ca78>] idle_loop+0x9d/0x9f
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Xen BUG at page_alloc.c:536
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...

Andy

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Thanks. Our testing has shown this up too. Unfortunately the cause hasn't
been tracked down yet.

 -- Keir

On 13/03/2009 10:33, "Andrew Lyon" <andrew.lyon@gmail.com> wrote:

> Hi,
>
> Running Xen unstable on a Dell Optiplex 755, after starting and
> shutting down a few HVM guests the system crashes with the following
> message:
>
> This is with the XenSource 2.6.18.8 kernel:
>
> (XEN) ** page_alloc.c:407 -- 449/512 ffffffffffffffff
> (XEN) Xen BUG at page_alloc.c:409
> [...]
>
> And here again running the openSUSE 2.6.27 kernel:
>
> (XEN) ** page_alloc.c:534 -- 0/1 ffffffffffffffff
> (XEN) Xen BUG at page_alloc.c:536
> [...]
>
> Andy
Thanks for the log. It seems the count_info is -1UL in this situation; I
think it may be because of some change to count_info, and I will try to
check it.

Thanks
Yunhong Jiang

>-----Original Message-----
>From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
>Sent: 13 March 2009 18:39
>To: Andrew Lyon; Xen-devel
>Cc: Jiang, Yunhong
>Subject: Re: [Xen-devel] Xen unstable crash
>
>Thanks. Our testing has shown this up too. Unfortunately the cause
>hasn't been tracked down yet.
>
> -- Keir
>
>On 13/03/2009 10:33, "Andrew Lyon" <andrew.lyon@gmail.com> wrote:
>
>> [...]
If you can reproduce this bug, it's worth trying to revert c/s 19285 and
trying again:

  hg export 19285 | patch -Rp1

To put the tree back into a clean state afterwards:

  hg diff | patch -Rp1

If the bug still reproduces, another possible culprit is c/s 19317.

 -- Keir

On 13/03/2009 12:55, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:

> Thanks for the log. It seems the count_info is -1UL in this situation; I
> think it may be because of some change to count_info, and I will try to
> check it.
>
> Thanks
> Yunhong Jiang
>
> [...]
Originally I suspected the cause was put_page() being called wrongly
(maybe twice without synchronisation), leaving count_info at -1 (for
example, put_page() on an already-free page), but I didn't find any such
usage. Can you please add some printk in put_page() to check whether
count_info is already 0 when it is called?

Thanks
Yunhong Jiang

>-----Original Message-----
>From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
>Sent: 13 March 2009 22:19
>To: Jiang, Yunhong; Andrew Lyon; Xen-devel
>Subject: Re: [Xen-devel] Xen unstable crash
>
>If you can reproduce this bug, it's worth trying to revert c/s 19285 and
>trying again:
>
>  hg export 19285 | patch -Rp1
>
>To put the tree back into a clean state afterwards:
>
>  hg diff | patch -Rp1
>
>If the bug still reproduces, another possible culprit is c/s 19317.
>
> -- Keir
>
>[...]
I'm not sure whether we can temporarily print some information (or BUG) in put_page() when the count_info is already 0. (I think that should be a BUG even in the long run, since get_page() will fail if count_info overflows.) As I can't reproduce this issue locally, I have to guess at the possible root cause. If there is a spurious put_page(), things may have worked fine before changesets 19285/19286 (especially when the window between the two put_page() calls is quite small), since originally free_heap_pages() did not check count_info. Now, however, it will trigger the ASSERT() in free_heap_pages() for sure. That could also explain the BUG() in the alloc path, although I can't explain why it worked before.

Thanks
Yunhong Jiang

>-----Original Message-----
>From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
>Sent: 13 March 2009 22:19
>To: Jiang, Yunhong; Andrew Lyon; Xen-devel
>Subject: Re: [Xen-devel] Xen unstable crash
>
>If you can reproduce this bug, it's worth trying to revert c/s
>19285 and try again:
> hg export 19285 | patch -Rp1
>To put the tree back into a clean state afterwards:
> hg diff | patch -Rp1
>
>If the bug still reproduces, another possible culprit is c/s 19317.
>
> -- Keir
>
>On 13/03/2009 12:55, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
>
>> Thanks for the log. It seems the count info is -1UL in this
>> situation; I think it may be because of some change to count_info,
>> and I will try to check it.
>>
>> Thanks
>> Yunhong Jiang
>>
>>> -----Original Message-----
>>> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
>>> Sent: 13 March 2009 18:39
>>> To: Andrew Lyon; Xen-devel
>>> Cc: Jiang, Yunhong
>>> Subject: Re: [Xen-devel] Xen unstable crash
>>>
>>> Thanks. Our testing has shown this up too. The cause hasn't
>>> been tracked down yet, unfortunately.
>>>
>>> -- Keir
>>>
>>> On 13/03/2009 10:33, "Andrew Lyon" <andrew.lyon@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Running Xen unstable on a Dell Optiplex 755, after starting and
>>>> shutting down a few hvm's the system crashes with the following
>>>> message:
>>>>
>>>> Andy
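The check Yunhong proposes above can be sketched as follows. This is illustrative only, not Xen's actual put_page(): the struct page_info layout, the put_page_checked() name, and the error reporting are all assumptions. The point is catching the decrement-past-zero at its source, rather than letting count_info wrap to -1UL and blow up later in free_heap_pages():

```c
#include <stdio.h>

/* Hypothetical stand-in for Xen's struct page_info; only the
 * count_info word matters for this sketch. */
struct page_info {
    unsigned long count_info;
};

/* Sketch of the proposed debug check: refuse (and report) a put when
 * the reference count is already zero, instead of silently wrapping
 * count_info to -1UL (0xffffffffffffffff). */
static int put_page_checked(struct page_info *pg)
{
    unsigned long x, nx;

    do {
        x = pg->count_info;
        if (x == 0) {
            /* Spurious put: report it at the point of the bug. */
            fprintf(stderr, "put_page underflow: count_info=%#lx\n", x);
            return -1;
        }
        nx = x - 1;
    } while (!__sync_bool_compare_and_swap(&pg->count_info, x, nx));

    return 0;
}
```

With a check like this, the spurious put_page() would be reported where it happens, instead of surfacing later as the -1UL count_info seen in the free_heap_pages() BUG.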
We tried our auto test suite (the detailed test results follow; they are the same as the Bi-weekly VMX status report sent by Haicheng), but still can't trigger the issue. Andrew Lyon, can you share more detailed info on how this issue is produced?

Thanks
Yunhong Jiang

Test Environment:
===============================================================
Platform   : x86_64
Service OS : Red Hat Enterprise Linux Server release 5.1 (Tikanga)
Hardware   : Nehalem
Xen package: 19043:10a8fae412c5

Platform   : PAE
Service OS : Red Hat Enterprise Linux Server release 5.2 (Tikanga)
Hardware   : Nehalem
Xen package: 19043:10a8fae412c5

Details:
====================================================================
X86_64:
Summary Test Report of Last Session
====================================================================
                         Total  Pass  Fail  NoResult  Crash
====================================================================
vtd_ept_vpid 16 11 5 0 0
ras_ept_vpid 1 1 0 0 0
control_panel_ept_vpid 18 18 0 0 0
stubdom_ept_vpid 2 1 1 0 0
gtest_ept_vpid 22 22 0 0 0
acpi_ept_vpid 5 1 4 0 0
device_model_ept_vpid 2 2 0 0 0
====================================================================
vtd_ept_vpid 16 11 5 0 0
 :two_dev_up_xp_nomsi_64_ 1 1 0 0 0
 :two_dev_smp_nomsi_64_g3 1 1 0 0 0
 :two_dev_scp_64_g32e 1 0 1 0 0
 :lm_pcie_smp_64_g32e 1 0 1 0 0
 :lm_pcie_up_64_g32e 1 0 1 0 0
 :two_dev_up_64_g32e 1 0 1 0 0
 :lm_pcie_up_xp_nomsi_64_ 1 1 0 0 0
 :two_dev_up_nomsi_64_g32 1 1 0 0 0
 :two_dev_smp_64_g32e 1 0 1 0 0
 :lm_pci_up_xp_nomsi_64_g 1 1 0 0 0
 :lm_pci_up_nomsi_64_g32e 1 1 0 0 0
 :two_dev_smp_xp_nomsi_64 1 1 0 0 0
 :two_dev_scp_nomsi_64_g3 1 1 0 0 0
 :lm_pcie_smp_xp_nomsi_64 1 1 0 0 0
 :lm_pci_smp_nomsi_64_g32 1 1 0 0 0
 :lm_pci_smp_xp_nomsi_64_ 1 1 0 0 0
ras_ept_vpid 1 1 0 0 0
 :cpu_online_offline_64_g 1 1 0 0 0
control_panel_ept_vpid 18 18 0 0 0
 :XEN_1500M_guest_64_g32e 1 1 0 0 0
 :XEN_LM_Continuity_64_g3 1 1 0 0 0
 :XEN_256M_xenu_64_gPAE 1 1 0 0 0
 :XEN_four_vmx_xenu_seq_6 1 1 0 0 0
 :XEN_vmx_vcpu_pin_64_g32 1 1 0 0 0
 :XEN_SR_Continuity_64_g3 1 1 0 0 0
 :XEN_linux_win_64_g32e 1 1 0 0 0
 :XEN_vmx_2vcpu_64_g32e 1 1 0 0 0
 :XEN_1500M_guest_64_gPAE 1 1 0 0 0
 :XEN_four_dguest_co_64_g 1 1 0 0 0
 :XEN_two_winxp_64_g32e 1 1 0 0 0
 :XEN_four_sguest_seq_64_ 1 1 0 0 0
 :XEN_256M_guest_64_gPAE 1 1 0 0 0
 :XEN_LM_SMP_64_g32e 1 1 0 0 0
 :XEN_Nevada_xenu_64_g32e 1 1 0 0 0
 :XEN_256M_guest_64_g32e 1 1 0 0 0
 :XEN_SR_SMP_64_g32e 1 1 0 0 0
 :XEN_four_sguest_seq_64_ 1 1 0 0 0
stubdom_ept_vpid 2 1 1 0 0
 :boot_stubdom_no_qcow_64 1 1 0 0 0
 :boot_stubdom_qcow_64_g3 1 0 1 0 0
gtest_ept_vpid 22 22 0 0 0
 :boot_up_acpi_win2k_64_g 1 1 0 0 0
 :boot_up_noacpi_win2k_64 1 1 0 0 0
 :reboot_xp_64_g32e 1 1 0 0 0
 :boot_solaris10u5_64_g32 1 1 0 0 0
 :boot_up_vista_64_g32e 1 1 0 0 0
 :boot_indiana_64_g32e 1 1 0 0 0
 :boot_up_acpi_xp_64_g32e 1 1 0 0 0
 :boot_smp_acpi_xp_64_g32 1 1 0 0 0
 :boot_up_acpi_64_g32e 1 1 0 0 0
 :boot_base_kernel_64_g32 1 1 0 0 0
 :boot_up_win2008_64_g32e 1 1 0 0 0
 :kb_nightly_64_g32e 1 1 0 0 0
 :boot_up_acpi_win2k3_64_ 1 1 0 0 0
 :boot_nevada_64_g32e 1 1 0 0 0
 :boot_smp_vista_64_g32e 1 1 0 0 0
 :ltp_nightly_64_g32e 1 1 0 0 0
 :boot_fc9_64_g32e 1 1 0 0 0
 :boot_smp_win2008_64_g32 1 1 0 0 0
 :boot_smp_acpi_win2k3_64 1 1 0 0 0
 :boot_rhel5u1_64_g32e 1 1 0 0 0
 :reboot_fc6_64_g32e 1 1 0 0 0
 :boot_smp_acpi_win2k_64_ 1 1 0 0 0
acpi_ept_vpid 5 1 4 0 0
 :monitor_c_status_64_g32 1 0 1 0 0
 :check_t_control_64_g32e 1 0 1 0 0
 :hvm_s3_sr_64_g32e 1 0 1 0 0
 :hvm_s3_smp_64_g32e 1 0 1 0 0
 :monitor_p_status_64_g32 1 1 0 0 0
device_model_ept_vpid 2 2 0 0 0
 :pv_on_up_64_g32e 1 1 0 0 0
 :pv_on_smp_64_g32e 1 1 0 0 0
====================================================================
Total 66 56 10 0 0

32PAE:
Summary Test Report of Last Session
====================================================================
                         Total  Pass  Fail  NoResult  Crash
====================================================================
vtd_ept_vpid 16 11 5 0 0
ras_ept_vpid 1 1 0 0 0
control_panel_ept_vpid 14 14 0 0 0
stubdom_ept_vpid 2 1 1 0 0
gtest_ept_vpid 24 24 0 0 0
device_model_ept_vpid 2 0 0 2 0
====================================================================
vtd_ept_vpid 16 11 5 0 0
 :lm_pcie_smp_xp_nomsi_PA 1 1 0 0 0
 :lm_pci_up_xp_nomsi_PAE_ 1 1 0 0 0
 :lm_pci_up_nomsi_PAE_gPA 1 1 0 0 0
 :two_dev_scp_nomsi_PAE_g 1 1 0 0 0
 :lm_pcie_up_xp_nomsi_PAE 1 1 0 0 0
 :lm_pci_smp_xp_nomsi_PAE 1 1 0 0 0
 :two_dev_up_PAE_gPAE 1 0 1 0 0
 :two_dev_up_xp_nomsi_PAE 1 1 0 0 0
 :lm_pcie_smp_PAE_gPAE 1 0 1 0 0
 :two_dev_smp_xp_nomsi_PA 1 1 0 0 0
 :two_dev_smp_PAE_gPAE 1 0 1 0 0
 :two_dev_smp_nomsi_PAE_g 1 1 0 0 0
 :two_dev_up_nomsi_PAE_gP 1 1 0 0 0
 :two_dev_scp_PAE_gPAE 1 0 1 0 0
 :lm_pcie_up_PAE_gPAE 1 0 1 0 0
 :lm_pci_smp_nomsi_PAE_gP 1 1 0 0 0
ras_ept_vpid 1 1 0 0 0
 :cpu_online_offline_PAE_ 1 1 0 0 0
control_panel_ept_vpid 14 14 0 0 0
 :XEN_four_vmx_xenu_seq_P 1 1 0 0 0
 :XEN_four_dguest_co_PAE_ 1 1 0 0 0
 :XEN_SR_SMP_PAE_gPAE 1 1 0 0 0
 :XEN_linux_win_PAE_gPAE 1 1 0 0 0
 :XEN_Nevada_xenu_PAE_gPA 1 1 0 0 0
 :XEN_LM_SMP_PAE_gPAE 1 1 0 0 0
 :XEN_SR_Continuity_PAE_g 1 1 0 0 0
 :XEN_vmx_vcpu_pin_PAE_gP 1 1 0 0 0
 :XEN_LM_Continuity_PAE_g 1 1 0 0 0
 :XEN_256M_guest_PAE_gPAE 1 1 0 0 0
 :XEN_1500M_guest_PAE_gPA 1 1 0 0 0
 :XEN_two_winxp_PAE_gPAE 1 1 0 0 0
 :XEN_four_sguest_seq_PAE 1 1 0 0 0
 :XEN_vmx_2vcpu_PAE_gPAE 1 1 0 0 0
stubdom_ept_vpid 2 1 1 0 0
 :boot_stubdom_no_qcow_PA 1 1 0 0 0
 :boot_stubdom_qcow_PAE_g 1 0 1 0 0
gtest_ept_vpid 24 24 0 0 0
 :boot_up_acpi_PAE_gPAE 1 1 0 0 0
 :ltp_nightly_PAE_gPAE 1 1 0 0 0
 :reboot_xp_PAE_gPAE 1 1 0 0 0
 :boot_up_acpi_xp_PAE_gPA 1 1 0 0 0
 :boot_up_vista_PAE_gPAE 1 1 0 0 0
 :boot_up_acpi_win2k3_PAE 1 1 0 0 0
 :boot_smp_acpi_win2k3_PA 1 1 0 0 0
 :boot_smp_acpi_win2k_PAE 1 1 0 0 0
 :boot_up_acpi_win2k_PAE_ 1 1 0 0 0
 :boot_smp_acpi_xp_PAE_gP 1 1 0 0 0
 :boot_up_noacpi_win2k_PA 1 1 0 0 0
 :boot_smp_vista_PAE_gPAE 1 1 0 0 0
 :boot_up_noacpi_win2k3_P 1 1 0 0 0
 :boot_nevada_PAE_gPAE 1 1 0 0 0
 :boot_solaris10u5_PAE_gP 1 1 0 0 0
 :boot_indiana_PAE_gPAE 1 1 0 0 0
 :boot_rhel5u1_PAE_gPAE 1 1 0 0 0
 :boot_base_kernel_PAE_gP 1 1 0 0 0
 :boot_up_win2008_PAE_gPA 1 1 0 0 0
 :boot_up_noacpi_xp_PAE_g 1 1 0 0 0
 :boot_smp_win2008_PAE_gP 1 1 0 0 0
 :reboot_fc6_PAE_gPAE 1 1 0 0 0
 :boot_fc10_PAE_gPAE 1 1 0 0 0
 :kb_nightly_PAE_gPAE 1 1 0 0 0
device_model_ept_vpid 2 0 0 2 0
 :pv_on_up_PAE_gPAE 1 0 0 1 0
 :pv_on_smp_PAE_gPAE 1 0 0 1 0
====================================================================
Total 59 51 6 2 0

Keir Fraser <mailto:keir.fraser@eu.citrix.com> wrote:
> If you can reproduce this bug, it's worth trying to revert c/s 19285 and try
> again:
>  hg export 19285 | patch -Rp1
> To put the tree back into a clean state afterwards:
>  hg diff | patch -Rp1
>
> If the bug still reproduces, another possible culprit is c/s 19317.
>
> -- Keir
>
> On 13/03/2009 12:55, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:
>
>> Thanks for the log. It seems the count info is -1UL in this
>> situation; I think it may be because of some change to count_info, and I
>> will try to check it.
>>
>> Thanks
>> Yunhong Jiang
>>
>>> -----Original Message-----
>>> From: Keir Fraser [mailto:keir.fraser@eu.citrix.com]
>>> Sent: 13 March 2009 18:39
>>> To: Andrew Lyon; Xen-devel
>>> Cc: Jiang, Yunhong
>>> Subject: Re: [Xen-devel] Xen unstable crash
>>>
>>> Thanks. Our testing has shown this up too. The cause hasn't been tracked
>>> down yet, unfortunately.
>>>
>>> -- Keir
>>>
>>> On 13/03/2009 10:33, "Andrew Lyon" <andrew.lyon@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Running Xen unstable on a Dell Optiplex 755, after starting and
>>>> shutting down a few hvm's the system crashes with the following
>>>> message:
>>>>
>>>> Andy
2009/3/17 Jiang, Yunhong <yunhong.jiang@intel.com>:
> We tried our auto test suite (the detailed test results follow; they are the same as the Bi-weekly VMX status report sent by Haicheng), but still can't trigger the issue. Andrew Lyon, can you share more detailed info on how this issue is produced?

I will do some testing now.

Andy
On Tue, Mar 17, 2009 at 9:35 AM, Andrew Lyon <andrew.lyon@gmail.com> wrote:
> 2009/3/17 Jiang, Yunhong <yunhong.jiang@intel.com>:
>> We tried our auto test suite (the detailed test results follow; they are the same as the Bi-weekly VMX status report sent by Haicheng), but still can't trigger the issue. Andrew Lyon, can you share more detailed info on how this issue is produced?
>
> I will do some testing now.
>
> Andy

I booted Xen unstable with the 64-bit 2.6.18.8 kernel, started a 32-bit
Windows XP hvm, a 32-bit Vista hvm, and finally a 64-bit Windows Vista
hvm; as the 64-bit one was booting up, the system crashed with this
message:

(XEN) ** page_alloc.c:534 -- 0/1 ffffffffffffffff
(XEN) Xen BUG at page_alloc.c:536
(XEN) ----[ Xen-3.4-unstable x86_64 debug=y Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff828c80112d77>] free_heap_pages+0x12f/0x4b8
(XEN) RFLAGS: 0000000000010206   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff82840235bc00   rcx: 0000000000000001
(XEN) rdx: ffffffffffffffff   rsi: 000000000000000a   rdi: ffff828c801fedec
(XEN) rbp: ffff830127fdfea8   rsp: ffff830127fdfe58   r8:  0000000000000004
(XEN) r9:  0000000000000004   r10: 0000000000000010   r11: 0000000000000010
(XEN) r12: ffff82840235bc00   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0080000000000000   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000011de97000   cr2: 0000000080d30000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff830127fdfe58:
(XEN)    ffff828c8011c0c8 ffff82840235bc00 0000000127fdfe98 0000000000000001
(XEN)    0000000000000000 0000000000000200 c2c2c2c2c2c2c2c2 0000000000000000
(XEN)    0000000000000000 ffff830000000000 ffff830127fdfee8 ffff828c8011329a
(XEN)    00000042e383c411 0000000000000001 ffff830127fdff28 ffff828c80297880
(XEN)    0000000000619e40 0000000000619e90 ffff830127fdff18 ffff828c8011ba21
(XEN)    ffff8300cef8a000 ffff8300cef8a000 0000000000619e40 000000000061a710
(XEN)    00007cfed80200b7 ffff828c801cf296 0000000000619e90 0000000000619e40
(XEN)    000000000061a710 0000000000619e40 00007fff0032bee0 0000000000000000
(XEN)    0000000000000246 0000000000000000 0000000000000000 00002b65aaf1aae0
(XEN)    000000000061a6c0 0000000000619e40 00000000e814ec72 000000000040df76
(XEN)    0000000000000000 000000f900000000 0000000000407e68 000000000000e033
(XEN)    0000000000000206 00007fff0032beb0 000000000000e02b 7f466d7b7f79beef
(XEN)    fdb7dbb473b9beef 0084a8008022beef 0005020b1a03beef f5f5ff7b00000001
(XEN)    ffff8300cef8a000
(XEN) Xen call trace:
(XEN)    [<ffff828c80112d77>] free_heap_pages+0x12f/0x4b8
(XEN)    [<ffff828c8011329a>] page_scrub_softirq+0x19a/0x23c
(XEN)    [<ffff828c8011ba21>] do_softirq+0x6a/0x77
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Xen BUG at page_alloc.c:536
(XEN) ****************************************

The problem is not predictable; in one instance it happened when
starting the first hvm, other times I've been able to start and shut
down several before it happened.

Andy
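The "0/1 ffffffffffffffff" in the BUG line is consistent with the page's count_info having wrapped to -1UL by the time the page reaches the free path. A minimal sketch of the kind of sanity check that is firing, purely illustrative and not Xen's actual free_heap_pages() code (the struct layout and function name are assumptions):

```c
#include <stdio.h>

/* Illustrative page descriptor; not Xen's real struct page_info. */
struct page_info {
    unsigned long count_info;
};

/* A page handed back to the heap must have a zero reference count.
 * A count that has wrapped past zero shows up here as -1UL
 * (0xffffffffffffffff), matching the value printed in the crash log. */
static int free_heap_page_checked(struct page_info *pg)
{
    if (pg->count_info != 0) {
        /* Xen would BUG() at this point; we just report and bail. */
        fprintf(stderr, "BUG: freeing page with count_info=%#lx\n",
                pg->count_info);
        return -1;
    }
    /* ...would link the page back into the buddy free lists here... */
    return 0;
}
```

Under this reading, the check itself is sound; the underlying bug is whatever decremented the count one time too many before the free.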
On Tue, Mar 17, 2009 at 9:50 AM, Andrew Lyon <andrew.lyon@gmail.com> wrote:
> On Tue, Mar 17, 2009 at 9:35 AM, Andrew Lyon <andrew.lyon@gmail.com> wrote:
>> 2009/3/17 Jiang, Yunhong <yunhong.jiang@intel.com>:
>>> We tried our auto test suite (the detailed test results follow; they are the same as the Bi-weekly VMX status report sent by Haicheng), but still can't trigger the issue. Andrew Lyon, can you share more detailed info on how this issue is produced?
>>
>> I will do some testing now.
>>
>> Andy
>
> I booted Xen unstable with the 64-bit 2.6.18.8 kernel, started a 32-bit
> Windows XP hvm, a 32-bit Vista hvm, and finally a 64-bit Windows Vista
> hvm; as the 64-bit one was booting up, the system crashed with this
> message:
>
> (XEN) ** page_alloc.c:534 -- 0/1 ffffffffffffffff
> (XEN) Xen BUG at page_alloc.c:536
>
> The problem is not predictable; in one instance it happened when
> starting the first hvm, other times I've been able to start and shut
> down several before it happened.
>
> Andy

This time I was able to start all 3 of the vm's I mentioned; they are
set up to automatically boot, run a chkdsk, then shut down. The two
Vista vm's completed the cycle and I was able to start them again
before the error happened; this time it triggered just as one of them
was shutting down:

(XEN) ** page_alloc.c:534 -- 0/1 ffffffffffffffff
(XEN) Xen BUG at page_alloc.c:536
(XEN) ----[ Xen-3.4-unstable x86_64 debug=y Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff828c80112d77>] free_heap_pages+0x12f/0x4b8
(XEN) RFLAGS: 0000000000010206   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff828400090dc0   rcx: 0000000000000001
(XEN) rdx: ffffffffffffffff   rsi: 000000000000000a   rdi: ffff828c801fedec
(XEN) rbp: ffff830127fdfea8   rsp: ffff830127fdfe58   r8:  0000000000000004
(XEN) r9:  0000000000000004   r10: 0000000000000010   r11: 0000000000000010
(XEN) r12: ffff828400090dc0   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0080000000000000   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000011d884000   cr2: 0000000001076090
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff830127fdfe58:
(XEN)    c2c2c2c2c2c2c2c2 ffff828400090dc0 00000001000917c0 0000000000000001
(XEN)    0000000000000000 0000000000000200 c2c2c2c2c2c2c2c2 ffff828400090da0
(XEN)    ffff828400090cc0 ffff830000000000 ffff830127fdfee8 ffff828c8011329a
(XEN)    000000962fa5971f 0000000000000001 ffff830127fdff28 ffff828c80297880
(XEN)    0000000000000000 000000962c2d2bed ffff830127fdff18 ffff828c8011ba21
(XEN)    0000000000000000 ffff8300cfdfc000 0000000000000000 ffff88000513d620
(XEN)    00007cfed80200b7 ffff828c801cf296 000000962c2d2bed 0000000000000000
(XEN)    ffff88000513d620 0000000000000000 ffffffff8062be88 0000000000020800
(XEN)    0000000000000246 0000000000000000 ffff880037fe5c98 0000000000000000
(XEN)    0000000000000000 00000000000003e8 ffffffffff578000 ffffffff8054d3a0
(XEN)    ffff88000513d620 000000f900000000 ffffffff804b6d2f 000000000000e033
(XEN)    0000000000000286 ffffffff8062be88 000000000000e02b 7f466d7b7f79beef
(XEN)    fdb7dbb473b9beef 0084a8008022beef 0005020b1a03beef f5f5ff7b00000001
(XEN)    ffff8300cfdfc000
(XEN) Xen call trace:
(XEN)    [<ffff828c80112d77>] free_heap_pages+0x12f/0x4b8
(XEN)    [<ffff828c8011329a>] page_scrub_softirq+0x19a/0x23c
(XEN)    [<ffff828c8011ba21>] do_softirq+0x6a/0x77
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Xen BUG at page_alloc.c:536
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
On 17/03/2009 10:00, "Andrew Lyon" <andrew.lyon@gmail.com> wrote:

> This time I was able to start all 3 of the vm's I mentioned; they are
> set up to automatically boot, run a chkdsk, then shut down. The two
> Vista vm's completed the cycle and I was able to start them again
> before the error happened; this time it triggered just as one of them
> was shutting down:

Can you try reverting some suspicious changesets?
  hg export 19317 | patch -Rp1
  hg export 19285 | patch -Rp1
Then re-build the hypervisor (no need to redo tools or kernel). To put the
tree back in a clean state afterwards:
  hg diff | patch -Rp1

Thanks,
Keir
On Tue, Mar 17, 2009 at 10:10 AM, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> Can you try reverting some suspicious changesets?
>
>   hg export 19317 | patch -Rp1
>   hg export 19285 | patch -Rp1
>
> Then re-build the hypervisor (no need to redo tools or kernel). To put the
> tree back in a clean state afterwards:
>
>   hg diff | patch -Rp1
>
> Thanks,
> Keir

Building now... I noticed that both of those changesets touch multi.c,
and I happened to notice this message was displayed before one of the
crashes:

(XEN) multi.c:3348:d12 write to pagetable during event injection:
cr2=0x80392d74, mfn=0xb58bb

Perhaps related?

Andy
On Tue, Mar 17, 2009 at 10:20 AM, Andrew Lyon <andrew.lyon@gmail.com> wrote:
> Building now... I noticed that both of those changesets touch multi.c,
> and I happened to notice this message was displayed before one of the
> crashes:
>
> (XEN) multi.c:3348:d12 write to pagetable during event injection:
> cr2=0x80392d74, mfn=0xb58bb
>
> Perhaps related?
>
> Andy

Reverting those two changesets did not stop the problem. I didn't get
the crash message this time because the scrollback buffer in my serial
console had been overwritten before I got a chance to copy it, but it
looked the same as the others.

Andy
On Tue, Mar 17, 2009 at 10:37 AM, Andrew Lyon <andrew.lyon@gmail.com> wrote:
> Reverting those two changesets did not stop the problem. I didn't get
> the crash message this time because the scrollback buffer in my serial
> console had been overwritten before I got a chance to copy it, but it
> looked the same as the others.
>
> Andy

Here is a crash with the two changesets reverted:

(XEN) ** page_alloc.c:407 -- 279/512 ffffffffffffffff
(XEN) Xen BUG at page_alloc.c:409
(XEN) ----[ Xen-3.4-unstable x86_64 debug=y Not tainted ]----
(XEN) CPU: 0
(XEN) RIP: e008:[<ffff828c8011206f>] alloc_heap_pages+0x35a/0x486
(XEN) RFLAGS: 0000000000010286 CONTEXT: hypervisor
(XEN) rax: 0000000000000000 rbx: ffff8284014422e0 rcx: 0000000000000001
(XEN) rdx: 000000000000000a rsi: 000000000000000a rdi: ffff828c801fec6c
(XEN) rbp: ffff828c8027fcb8 rsp: ffff828c8027fc58 r8: 0000000000000004
(XEN) r9: 0000000000000004 r10: 0000000000000010 r11: 0000000000000010
(XEN) r12: ffff828401440000 r13: 0000000000000117 r14: 0000000000000200
(XEN) r15: 0000000000000000 cr0: 0000000080050033 cr4: 00000000000026f0
(XEN) cr3: 000000011d9e6000 cr2: 000000000142f000
(XEN) ds: 0000 es: 0000 fs: 0063 gs: 0000 ss: e010 cs: e008
(XEN) Xen stack trace from rsp=ffff828c8027fc58:
(XEN) 0000000900000001 0000000000000098 0000000000000200 0000000100000001
(XEN) ffff828c8027fcf8 ffff828c801a5c6d ffff828c8027fccc ffff828c8027ff28
(XEN) 0000000000000027 0000000000000000 ffff830102176000 0000000000000000
(XEN) ffff828c8027fcf8 ffff828c8011383b 0100000400000009 ffff828c8027ff28
(XEN) 0000000000000006 0000000044803758 00000000448037c0 0000000000000000
(XEN) ffff828c8027ff08 ffff828c80110591 0000000000000000 0000000000000000
(XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) 0000000000000200 0000000000000000 0000000000000000 ffff830127fcc000
(XEN) 0000000000800167 000000000004ee7f 800000004ee7f167 ffff8284009dcfe0
(XEN) ffff828c8027fd88 ffff828c80151777 ffff828c8027fda8 ffff828c8013c559
(XEN) 0000000000000004 0000020000000001 ffff828c8027fdb8 ffff828c8013c5f6
(XEN) ffff828c8027fde8 ffff828c80107247 ffff828c8027fde8 ffff828c8011d72e
(XEN) 0000000000000000 0000000000000000 ffff828c8027fe28 ffff830102176000
(XEN) 0000000000000282 0000000400000009 0000000044803750 ffff8300cfdfc030
(XEN) ffff83004f6bc018 ffff83004f6bc010 0000000000000001 ffff828c80119cb7
(XEN) 0000000000000082 ffff828c80237180 0000000000000002 ffff828c8027fe68
(XEN) ffff828c8011c0c8 ffff828c80237180 ffff828c8027fe78 ffff828c8014c173
(XEN) ffff828c8027fe98 ffff828c801d166d ffff828c8027fe98 00000000000a2200
(XEN) 0000000000007c00 ffff828c801d2063 0000000044803750 0000000000000004
(XEN) 0000000000000009 0000000000000004 00002af8d03b0eb7 ffff830102176000
(XEN) Xen call trace:
(XEN)    [<ffff828c8011206f>] alloc_heap_pages+0x35a/0x486
(XEN)    [<ffff828c8011383b>] alloc_domheap_pages+0x128/0x17b
(XEN)    [<ffff828c80110591>] do_memory_op+0x988/0x17a7
(XEN)    [<ffff828c801cf1bf>] syscall_enter+0xef/0x149
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at page_alloc.c:409
On 17/03/2009 10:37, "Andrew Lyon" <andrew.lyon@gmail.com> wrote:

>> Building now... I noticed that both of those changesets touch multi.c,
>> and I happened to notice this message was displayed before one of the
>> crashes:
>>
>> (XEN) multi.c:3348:d12 write to pagetable during event injection:
>> cr2=0x80392d74, mfn=0xb58bb
>
> Reverting those two changesets did not stop the problem. I didn't get
> the crash message this time because the scrollback buffer in my serial
> console had been overwritten before I got a chance to copy it, but it
> looked the same as the others.

Hmm.. I'll have to draw up a new hit list of suspicious changesets. The
difficulty of bisecting a problem when we have two separate repositories
involved (xen-unstable and the qemu repo) is something we may have to
consider after 3.4 is out. It's worked okay so far but it does rather
break down when the bugs hit the fan!

 -- Keir
On 17/03/2009 11:43, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:

>> Reverting those two changesets did not stop the problem. I didn't get
>> the crash message this time because the scrollback buffer in my serial
>> console had been overwritten before I got a chance to copy it, but it
>> looked the same as the others.
>
> Hmm.. I'll have to draw up a new hit list of suspicious changesets. The
> difficulty of bisecting a problem when we have two separate repositories
> involved (xen-unstable and the qemu repo) is something we may have to
> consider after 3.4 is out. It's worked okay so far but it does rather
> break down when the bugs hit the fan!

Hi Andrew,

My new most likely culprit is one of my own changesets, 19268. Can you try:

  hg export 19268 | patch -Rp1

Please? You'll have to hit return on a few patch warnings -- they are
failures in ia64-specific code so they are harmless to ignore. Again, then
build and install only the hypervisor binary itself.

 -- Keir
On 17/03/2009 13:13, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:

> Hi Andrew,
>
> My new most likely culprit is one of my own changesets, 19268. Can you try:
>
>   hg export 19268 | patch -Rp1
>
> Please? You'll have to hit return on a few patch warnings -- they are
> failures in ia64-specific code so they are harmless to ignore. Again, then
> build and install only the hypervisor binary itself.

Oops, the result of the above does not build. Please instead apply the
attached patch. This is a hacked-up manual reversion of the important parts
of c/s 19268. Apply it with 'patch -p1'.

Thanks,
Keir
On 17/03/2009 13:19, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:

> Oops, the result of the above does not build. Please instead apply the
> attached patch. This is a hacked-up manual reversion of the important parts
> of c/s 19268. Apply it with 'patch -p1'.

Actually I think I found the bug now, in which case it is fixed by c/s
19374. So please just pull latest xen-unstable and try that.

 -- Keir
On Tue, Mar 17, 2009 at 3:42 PM, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> Actually I think I found the bug now, in which case it is fixed by c/s
> 19374. So please just pull latest xen-unstable and try that.
>
> -- Keir

Would your revert patch also have fixed the bug? Because so far I've
not had a crash...

Andy
On 17/03/2009 16:08, "Andrew Lyon" <andrew.lyon@gmail.com> wrote:

> Would your revert patch also have fixed the bug? Because so far I've
> not had a crash...

Yes, it was in the changeset that I asked you to revert.

 -- Keir
On Tue, Mar 17, 2009 at 4:14 PM, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> Yes, it was in the changeset that I asked you to revert.
>
> -- Keir

Ok, reverting the changeset seems to have fixed the problem. Previously
I could rarely get domain ids into double figures before hitting the
crash; I've just started domain id 55.

I will revert the revert patch and pull 19374 tomorrow, and I will
confirm that it has fixed the crash.

Andy
On 17/03/2009 16:29, "Andrew Lyon" <andrew.lyon@gmail.com> wrote:

> Ok, reverting the changeset seems to have fixed the problem. Previously
> I could rarely get domain ids into double figures before hitting the
> crash; I've just started domain id 55.
>
> I will revert the revert patch and pull 19374 tomorrow, and I will
> confirm that it has fixed the crash.

Thanks for your efforts on this one.

 -- Keir
Sorry I didn't check mail last night, and thanks a lot for fixing this.

Yunhong Jiang

Andrew Lyon <mailto:andrew.lyon@gmail.com> wrote:
> Ok, reverting the changeset seems to have fixed the problem. Previously
> I could rarely get domain ids into double figures before hitting the
> crash; I've just started domain id 55.
>
> I will revert the revert patch and pull 19374 tomorrow, and I will
> confirm that it has fixed the crash.
>
> Andy
Boris Derzhavets
2009-Mar-18 03:52 UTC
[Xen-devel] Failure to build fs-backend with the most recent Xen Unstable (revision 19374)
make -C fs-back install
make[3]: Entering directory `/usr/src/xen-unstable.hg/tools/fs-back'
gcc -O1 -fno-omit-frame-pointer -fno-optimize-sibling-calls -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wno-unused-value -Wdeclaration-after-statement -D__XEN_TOOLS__ -MMD -MF .fs-xenbus.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -Werror -Wno-unused -fno-strict-aliasing -I../../tools/libxc -I../../tools/include -I../../tools/xenstore -I../../tools/include -I.. -I../lib -I. -D_GNU_SOURCE -c -o fs-xenbus.o fs-xenbus.c
gcc -O1 -fno-omit-frame-pointer -fno-optimize-sibling-calls -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wno-unused-value -Wdeclaration-after-statement -D__XEN_TOOLS__ -MMD -MF .fs-ops.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -Werror -Wno-unused -fno-strict-aliasing -I../../tools/libxc -I../../tools/include -I../../tools/xenstore -I../../tools/include -I.. -I../lib -I. -D_GNU_SOURCE -c -o fs-ops.o fs-ops.c
gcc -O1 -fno-omit-frame-pointer -fno-optimize-sibling-calls -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wno-unused-value -Wdeclaration-after-statement -D__XEN_TOOLS__ -MMD -MF .fs-backend.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -Werror -Wno-unused -fno-strict-aliasing -I../../tools/libxc -I../../tools/include -I../../tools/xenstore -I../../tools/include -I.. -I../lib -I. -D_GNU_SOURCE -o fs-backend fs-xenbus.o fs-ops.o -L. -L.. -L../lib -L../../tools/libxc -lxenctrl -L../../tools/xenstore -lxenstore -lrt fs-backend.c
cc1: warnings being treated as errors
fs-backend.c: In function ‘aio_signal_handler’:
fs-backend.c:382: error: ignoring return value of ‘write’, declared with attribute warn_unused_result
make[3]: *** [fs-backend] Error 1
make[3]: Leaving directory `/usr/src/xen-unstable.hg/tools/fs-back'
make[2]: *** [subdir-install-fs-back] Error 2
make[2]: Leaving directory `/usr/src/xen-unstable.hg/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/usr/src/xen-unstable.hg/tools'
make: *** [install-tools] Error 2

Boris.
Boris Derzhavets
2009-Mar-18 06:48 UTC
Re: [Xen-devel] Failure to build fs-backend with the most recent Xen Unstable (revision 19374)
Just to be able to compile, I fixed fs-backend.c:

static void aio_signal_handler(int signo, siginfo_t *info, void *context)
{
    int ret;
    struct fs_request *request =
        (struct fs_request *) info->si_value.sival_ptr;
    int saved_errno = errno;
    ret = write(pipefds[1], &request, sizeof(struct fs_request *));
    errno = saved_errno;
}

That brings up the next error, which seems unrelated to the previous one:

The error log from compiling the libSDL test is:
/tmp/qemu-conf--8065-.c:1:17: error: SDL.h: No such file or directory
/tmp/qemu-conf--8065-.c: In function ‘main’:
/tmp/qemu-conf--8065-.c:3: error: ‘SDL_INIT_VIDEO’ undeclared (first use in this function)
/tmp/qemu-conf--8065-.c:3: error: (Each undeclared identifier is reported only once
/tmp/qemu-conf--8065-.c:3: error: for each function it appears in.)
qemu successfuly configured for Xen
qemu-dm build
make -C ioemu-dir install
make[3]: Entering directory `/usr/src/xen-unstable.hg/tools/ioemu-remote'
xen-hooks.mak:56: === pciutils-dev package not found - missing /usr/include/pci
xen-hooks.mak:57: === PCI passthrough capability has been disabled
make[4]: Entering directory `/usr/src/xen-unstable.hg/tools/ioemu-remote/i386-dm'
../xen-hooks.mak:56: === pciutils-dev package not found - missing /usr/include/pci
../xen-hooks.mak:57: === PCI passthrough capability has been disabled
../xen-hooks.mak:56: === pciutils-dev package not found - missing /usr/include/pci
../xen-hooks.mak:57: === PCI passthrough capability has been disabled
  LINK i386-dm/qemu-dm
vl.o: In function `main':
/usr/src/xen-unstable.hg/tools/ioemu-dir/vl.c:5898: undefined reference to `pci_emulation_add'
collect2: ld returned 1 exit status
make[4]: *** [qemu-dm] Error 1
make[4]: Leaving directory `/usr/src/xen-unstable.hg/tools/ioemu-remote/i386-dm'
make[3]: *** [subdir-i386-dm] Error 2
make[3]: Leaving directory `/usr/src/xen-unstable.hg/tools/ioemu-remote'
make[2]: *** [subdir-install-ioemu-dir] Error 2
make[2]: Leaving directory `/usr/src/xen-unstable.hg/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/usr/src/xen-unstable.hg/tools'
make: *** [install-tools] Error 2
Well, it was my fault in the first place!

 K.

On 18/03/2009 01:09, "Jiang, Yunhong" <yunhong.jiang@intel.com> wrote:

> Sorry I didn't check mail last night, and thanks a lot for fixing this.
>
> Yunhong Jiang