Pasi Kärkkäinen
2010-Jun-06 17:33 UTC
[Xen-devel] xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
Hello,

I just tried the latest xen/stable-2.6.32.x kernel, ie. 2.6.32.15, with Xen 4.0.0, and I got this:

http://pasik.reaktio.net/xen/pv_ops-dom0-debug/log-2.6.32.15-pvops-dom0-xen-stable-x86_64.txt

Jun 5 20:28:46 f13 kernel: device vif1.0 entered promiscuous mode
Jun 5 20:28:46 f13 kernel: virbr0: topology change detected, propagating
Jun 5 20:28:46 f13 kernel: virbr0: port 1(vif1.0) entering forwarding state
Jun 5 20:28:47 f13 avahi-daemon[1384]: Registering new address record for fe80::fcff:ffff:feff:ffff on vif1.0.*.
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: =====================================================
Jun 5 20:29:28 f13 kernel: [ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
Jun 5 20:29:28 f13 kernel: 2.6.32.15 #2
Jun 5 20:29:28 f13 kernel: ------------------------------------------------------
Jun 5 20:29:28 f13 kernel: xenstored/1508 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
Jun 5 20:29:28 f13 kernel: (proc_subdir_lock){+.+...}, at: [<ffffffff8117bd76>] xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: and this task is already holding:
Jun 5 20:29:28 f13 kernel: (&port_user_lock){-.-...}, at: [<ffffffffa015a390>] evtchn_ioctl+0x1ae/0x2c7 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: which would create a new lock dependency:
Jun 5 20:29:28 f13 kernel: (&port_user_lock){-.-...} -> (proc_subdir_lock){+.+...}
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: but this new dependency connects a HARDIRQ-irq-safe lock:
Jun 5 20:29:28 f13 kernel: (&port_user_lock){-.-...}
Jun 5 20:29:28 f13 kernel: ... which became HARDIRQ-irq-safe at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108c2e8>] __lock_acquire+0x2d6/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a815>] evtchn_interrupt+0x2b/0x119 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffff810b80f4>] handle_IRQ_event+0x53/0x119
Jun 5 20:29:28 f13 kernel: [<ffffffff810ba096>] handle_level_irq+0x7d/0xdf
Jun 5 20:29:28 f13 kernel: [<ffffffff812b72bd>] __xen_evtchn_do_upcall+0xe7/0x168
Jun 5 20:29:28 f13 kernel: [<ffffffff812b7820>] xen_evtchn_do_upcall+0x37/0x4c
Jun 5 20:29:28 f13 kernel: [<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: to a HARDIRQ-irq-unsafe lock:
Jun 5 20:29:28 f13 kernel: (proc_subdir_lock){+.+...}
Jun 5 20:29:28 f13 kernel: ... which became HARDIRQ-irq-unsafe at:
Jun 5 20:29:28 f13 kernel: ... [<ffffffff8108c35c>] __lock_acquire+0x34a/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8117c8ef>] __proc_create+0x44/0x125
Jun 5 20:29:28 f13 kernel: [<ffffffff8117cc05>] proc_symlink+0x30/0xa3
Jun 5 20:29:28 f13 kernel: [<ffffffff819b8fde>] proc_root_init+0x62/0xb9
Jun 5 20:29:28 f13 kernel: [<ffffffff81995d49>] start_kernel+0x400/0x433
Jun 5 20:29:28 f13 kernel: [<ffffffff819952c1>] x86_64_start_reservations+0xac/0xb0
Jun 5 20:29:28 f13 kernel: [<ffffffff819990f3>] xen_start_kernel+0x659/0x660
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: other info that might help us debug this:
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: 2 locks held by xenstored/1508:
Jun 5 20:29:28 f13 kernel: #0: (&u->bind_mutex){+.+.+.}, at: [<ffffffffa015a20b>] evtchn_ioctl+0x29/0x2c7 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: #1: (&port_user_lock){-.-...}, at: [<ffffffffa015a390>] evtchn_ioctl+0x1ae/0x2c7 [xen_evtchn]
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: the dependencies between HARDIRQ-irq-safe lock and the holding lock:
Jun 5 20:29:28 f13 kernel: -> (&port_user_lock){-.-...} ops: 0 {
Jun 5 20:29:28 f13 kernel:    IN-HARDIRQ-W at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108c2e8>] __lock_acquire+0x2d6/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a815>] evtchn_interrupt+0x2b/0x119 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffff810b80f4>] handle_IRQ_event+0x53/0x119
Jun 5 20:29:28 f13 kernel: [<ffffffff810ba096>] handle_level_irq+0x7d/0xdf
Jun 5 20:29:28 f13 kernel: [<ffffffff812b72bd>] __xen_evtchn_do_upcall+0xe7/0x168
Jun 5 20:29:28 f13 kernel: [<ffffffff812b7820>] xen_evtchn_do_upcall+0x37/0x4c
Jun 5 20:29:28 f13 kernel: [<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
Jun 5 20:29:28 f13 kernel:    IN-SOFTIRQ-W at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108c309>] __lock_acquire+0x2f7/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a815>] evtchn_interrupt+0x2b/0x119 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffff810b80f4>] handle_IRQ_event+0x53/0x119
Jun 5 20:29:28 f13 kernel: [<ffffffff810ba096>] handle_level_irq+0x7d/0xdf
Jun 5 20:29:28 f13 kernel: [<ffffffff812b72bd>] __xen_evtchn_do_upcall+0xe7/0x168
Jun 5 20:29:28 f13 kernel: [<ffffffff812b7820>] xen_evtchn_do_upcall+0x37/0x4c
Jun 5 20:29:28 f13 kernel: [<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
Jun 5 20:29:28 f13 kernel:    INITIAL USE at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108c3d3>] __lock_acquire+0x3c1/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a815>] evtchn_interrupt+0x2b/0x119 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffff810b80f4>] handle_IRQ_event+0x53/0x119
Jun 5 20:29:28 f13 kernel: [<ffffffff810ba096>] handle_level_irq+0x7d/0xdf
Jun 5 20:29:28 f13 kernel: [<ffffffff812b72bd>] __xen_evtchn_do_upcall+0xe7/0x168
Jun 5 20:29:28 f13 kernel: [<ffffffff812b7820>] xen_evtchn_do_upcall+0x37/0x4c
Jun 5 20:29:28 f13 kernel: [<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
Jun 5 20:29:28 f13 kernel: }
Jun 5 20:29:28 f13 kernel: ... key at: [<ffffffffa015afb8>] __key.27039+0x0/0xfffffffffffffaaa [xen_evtchn]
Jun 5 20:29:28 f13 kernel: ... acquired at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108bfb1>] check_irq_usage+0x5d/0xbe
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ca6a>] __lock_acquire+0xa58/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8117be94>] remove_proc_entry+0x34/0x221
Jun 5 20:29:28 f13 kernel: [<ffffffff810bab5b>] unregister_handler_proc+0x31/0x35
Jun 5 20:29:28 f13 kernel: [<ffffffff810b925a>] __free_irq+0x104/0x193
Jun 5 20:29:28 f13 kernel: [<ffffffff810b9321>] free_irq+0x38/0x53
Jun 5 20:29:28 f13 kernel: [<ffffffff812b7c03>] unbind_from_irqhandler+0x15/0x20
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a09a>] evtchn_unbind_from_user+0x23/0x36 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a3c7>] evtchn_ioctl+0x1e5/0x2c7 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffff81135e95>] vfs_ioctl+0x22/0x87
Jun 5 20:29:28 f13 kernel: [<ffffffff811363fe>] do_vfs_ioctl+0x488/0x4ce
Jun 5 20:29:28 f13 kernel: [<ffffffff8113649a>] sys_ioctl+0x56/0x79
Jun 5 20:29:28 f13 kernel: [<ffffffff81012cb2>] system_call_fastpath+0x16/0x1b
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock:
Jun 5 20:29:28 f13 kernel: -> (proc_subdir_lock){+.+...} ops: 0 {
Jun 5 20:29:28 f13 kernel:    HARDIRQ-ON-W at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108c35c>] __lock_acquire+0x34a/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8117c8ef>] __proc_create+0x44/0x125
Jun 5 20:29:28 f13 kernel: [<ffffffff8117cc05>] proc_symlink+0x30/0xa3
Jun 5 20:29:28 f13 kernel: [<ffffffff819b8fde>] proc_root_init+0x62/0xb9
Jun 5 20:29:28 f13 kernel: [<ffffffff81995d49>] start_kernel+0x400/0x433
Jun 5 20:29:28 f13 kernel: [<ffffffff819952c1>] x86_64_start_reservations+0xac/0xb0
Jun 5 20:29:28 f13 kernel: [<ffffffff819990f3>] xen_start_kernel+0x659/0x660
Jun 5 20:29:28 f13 kernel:    SOFTIRQ-ON-W at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108c37d>] __lock_acquire+0x36b/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8117c8ef>] __proc_create+0x44/0x125
Jun 5 20:29:28 f13 kernel: [<ffffffff8117cc05>] proc_symlink+0x30/0xa3
Jun 5 20:29:28 f13 kernel: [<ffffffff819b8fde>] proc_root_init+0x62/0xb9
Jun 5 20:29:28 f13 kernel: [<ffffffff81995d49>] start_kernel+0x400/0x433
Jun 5 20:29:28 f13 kernel: [<ffffffff819952c1>] x86_64_start_reservations+0xac/0xb0
Jun 5 20:29:28 f13 kernel: [<ffffffff819990f3>] xen_start_kernel+0x659/0x660
Jun 5 20:29:28 f13 kernel:    INITIAL USE at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108c3d3>] __lock_acquire+0x3c1/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8117c8ef>] __proc_create+0x44/0x125
Jun 5 20:29:28 f13 kernel: [<ffffffff8117cc05>] proc_symlink+0x30/0xa3
Jun 5 20:29:28 f13 kernel: [<ffffffff819b8fde>] proc_root_init+0x62/0xb9
Jun 5 20:29:28 f13 kernel: [<ffffffff81995d49>] start_kernel+0x400/0x433
Jun 5 20:29:28 f13 kernel: [<ffffffff819952c1>] x86_64_start_reservations+0xac/0xb0
Jun 5 20:29:28 f13 kernel: [<ffffffff819990f3>] xen_start_kernel+0x659/0x660
Jun 5 20:29:28 f13 kernel: }
Jun 5 20:29:28 f13 kernel: ... key at: [<ffffffff8172b238>] proc_subdir_lock+0x18/0x40
Jun 5 20:29:28 f13 kernel: ... acquired at:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108bfb1>] check_irq_usage+0x5d/0xbe
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ca6a>] __lock_acquire+0xa58/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8117be94>] remove_proc_entry+0x34/0x221
Jun 5 20:29:28 f13 kernel: [<ffffffff810bab5b>] unregister_handler_proc+0x31/0x35
Jun 5 20:29:28 f13 kernel: [<ffffffff810b925a>] __free_irq+0x104/0x193
Jun 5 20:29:28 f13 kernel: [<ffffffff810b9321>] free_irq+0x38/0x53
Jun 5 20:29:28 f13 kernel: [<ffffffff812b7c03>] unbind_from_irqhandler+0x15/0x20
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a09a>] evtchn_unbind_from_user+0x23/0x36 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a3c7>] evtchn_ioctl+0x1e5/0x2c7 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffff81135e95>] vfs_ioctl+0x22/0x87
Jun 5 20:29:28 f13 kernel: [<ffffffff811363fe>] do_vfs_ioctl+0x488/0x4ce
Jun 5 20:29:28 f13 kernel: [<ffffffff8113649a>] sys_ioctl+0x56/0x79
Jun 5 20:29:28 f13 kernel: [<ffffffff81012cb2>] system_call_fastpath+0x16/0x1b
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel:
Jun 5 20:29:28 f13 kernel: stack backtrace:
Jun 5 20:29:28 f13 kernel: Pid: 1508, comm: xenstored Not tainted 2.6.32.15 #2
Jun 5 20:29:28 f13 kernel: Call Trace:
Jun 5 20:29:28 f13 kernel: [<ffffffff8108bf40>] check_usage+0x349/0x35d
Jun 5 20:29:28 f13 kernel: [<ffffffff8100f465>] ? xen_sched_clock+0x14/0x8c
Jun 5 20:29:28 f13 kernel: [<ffffffff8100f572>] ? check_events+0x12/0x20
Jun 5 20:29:28 f13 kernel: [<ffffffff8108bfb1>] check_irq_usage+0x5d/0xbe
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ca6a>] __lock_acquire+0xa58/0xd3a
Jun 5 20:29:28 f13 kernel: [<ffffffff8100f465>] ? xen_sched_clock+0x14/0x8c
Jun 5 20:29:28 f13 kernel: [<ffffffff8108ce28>] lock_acquire+0xdc/0x102
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] ? xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8100f572>] ? check_events+0x12/0x20
Jun 5 20:29:28 f13 kernel: [<ffffffff81170076>] ? posix_acl_clone+0x10/0x31
Jun 5 20:29:28 f13 kernel: [<ffffffff81478cf1>] _spin_lock+0x36/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] ? xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8117bd76>] xlate_proc_name+0x3d/0xbb
Jun 5 20:29:28 f13 kernel: [<ffffffff8117be94>] remove_proc_entry+0x34/0x221
Jun 5 20:29:28 f13 kernel: [<ffffffff8100ede1>] ? xen_force_evtchn_callback+0xd/0xf
Jun 5 20:29:28 f13 kernel: [<ffffffff8100f572>] ? check_events+0x12/0x20
Jun 5 20:29:28 f13 kernel: [<ffffffff8100ede1>] ? xen_force_evtchn_callback+0xd/0xf
Jun 5 20:29:28 f13 kernel: [<ffffffff8100f572>] ? check_events+0x12/0x20
Jun 5 20:29:28 f13 kernel: [<ffffffff8100ede1>] ? xen_force_evtchn_callback+0xd/0xf
Jun 5 20:29:28 f13 kernel: [<ffffffff8100f572>] ? check_events+0x12/0x20
Jun 5 20:29:28 f13 kernel: [<ffffffff810bab5b>] unregister_handler_proc+0x31/0x35
Jun 5 20:29:28 f13 kernel: [<ffffffff810b925a>] __free_irq+0x104/0x193
Jun 5 20:29:28 f13 kernel: [<ffffffff810b9321>] free_irq+0x38/0x53
Jun 5 20:29:28 f13 kernel: [<ffffffff812b7c03>] unbind_from_irqhandler+0x15/0x20
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a09a>] evtchn_unbind_from_user+0x23/0x36 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffffa015a3c7>] evtchn_ioctl+0x1e5/0x2c7 [xen_evtchn]
Jun 5 20:29:28 f13 kernel: [<ffffffffa015643e>] ? privcmd_ioctl+0x75/0x87 [xenfs]
Jun 5 20:29:28 f13 kernel: [<ffffffff81135e95>] vfs_ioctl+0x22/0x87
Jun 5 20:29:28 f13 kernel: [<ffffffff8100ede1>] ? xen_force_evtchn_callback+0xd/0xf
Jun 5 20:29:28 f13 kernel: [<ffffffff811363fe>] do_vfs_ioctl+0x488/0x4ce
Jun 5 20:29:28 f13 kernel: [<ffffffff81012cea>] ? sysret_check+0x2e/0x69
Jun 5 20:29:28 f13 kernel: [<ffffffff8113649a>] sys_ioctl+0x56/0x79
Jun 5 20:29:28 f13 kernel: [<ffffffff81012cb2>] system_call_fastpath+0x16/0x1b
Jun 5 20:29:29 f13 avahi-daemon[1384]: Withdrawing address record for fe80::fcff:ffff:feff:ffff on vif1.0.
Jun 5 20:29:29 f13 kernel: virbr0: port 1(vif1.0) entering disabled state
Jun 5 20:29:29 f13 kernel: virbr0: port 1(vif1.0) entering disabled state

Kernel .config here:
http://pasik.reaktio.net/xen/pv_ops-dom0-debug/config-2.6.32.15-pvops-dom0-xen-stable-x86_64

Any ideas?

-- Pasi

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Jeremy Fitzhardinge
2010-Jun-06 17:41 UTC
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
On 06/06/2010 10:33 AM, Pasi Kärkkäinen wrote:
> Hello,
>
> I just tried the latest xen/stable-2.6.32.x kernel, ie. 2.6.32.15, with Xen 4.0.0,
> and I got this:
>
> http://pasik.reaktio.net/xen/pv_ops-dom0-debug/log-2.6.32.15-pvops-dom0-xen-stable-x86_64.txt
>

Does this help?

From 3f5e554f669098c84c82ce75e7577f7e0f3fccde Mon Sep 17 00:00:00 2001
From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Date: Fri, 28 May 2010 15:28:27 -0700
Subject: [PATCH] xen/evtchn: don't do unbind_from_irqhandler under spinlock

unbind_from_irqhandler can end up doing /proc operations, which can't
happen under a spinlock.  So before removing the IRQ handler,
disable the irq under the port_user lock (masking the underlying event
channel and making sure the irq handler isn't running concurrently and
won't start running), then remove the handler without the lock.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index f79ac5c..6a3a129 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -375,10 +375,12 @@ static long evtchn_ioctl(struct file *file,
 			break;
 		}
 
-		evtchn_unbind_from_user(u, unbind.port);
+		disable_irq(irq_from_evtchn(unbind.port));
 
 		spin_unlock_irq(&port_user_lock);
 
+		evtchn_unbind_from_user(u, unbind.port);
+
 		rc = 0;
 		break;
 	}
@@ -484,11 +486,18 @@ static int evtchn_release(struct inode *inode, struct file *filp)
 		if (get_port_user(i) != u)
 			continue;
 
-		evtchn_unbind_from_user(get_port_user(i), i);
+		disable_irq(irq_from_evtchn(i));
 	}
 
 	spin_unlock_irq(&port_user_lock);
 
+	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
+		if (get_port_user(i) != u)
+			continue;
+
+		evtchn_unbind_from_user(get_port_user(i), i);
+	}
+
 	kfree(u->name);
 	kfree(u);
Pasi Kärkkäinen
2010-Jun-06 18:54 UTC
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
On Sun, Jun 06, 2010 at 10:41:04AM -0700, Jeremy Fitzhardinge wrote:
> On 06/06/2010 10:33 AM, Pasi Kärkkäinen wrote:
> > Hello,
> >
> > I just tried the latest xen/stable-2.6.32.x kernel, ie. 2.6.32.15, with Xen 4.0.0,
> > and I got this:
> >
> > http://pasik.reaktio.net/xen/pv_ops-dom0-debug/log-2.6.32.15-pvops-dom0-xen-stable-x86_64.txt
> >
>
> Does this help?
>

It gave failing hunks so I had to manually apply it to 2.6.32.15,
but it seems to fix that issue. No "unsafe lock order" messages anymore.

Thanks!

-- Pasi

> From 3f5e554f669098c84c82ce75e7577f7e0f3fccde Mon Sep 17 00:00:00 2001
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> Date: Fri, 28 May 2010 15:28:27 -0700
> Subject: [PATCH] xen/evtchn: don't do unbind_from_irqhandler under spinlock
>
> unbind_from_irqhandler can end up doing /proc operations, which can't
> happen under a spinlock.  So before removing the IRQ handler,
> disable the irq under the port_user lock (masking the underlying event
> channel and making sure the irq handler isn't running concurrently and
> won't start running), then remove the handler without the lock.
>
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>
> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
> index f79ac5c..6a3a129 100644
> --- a/drivers/xen/evtchn.c
> +++ b/drivers/xen/evtchn.c
> @@ -375,10 +375,12 @@ static long evtchn_ioctl(struct file *file,
> 			break;
> 		}
>
> -		evtchn_unbind_from_user(u, unbind.port);
> +		disable_irq(irq_from_evtchn(unbind.port));
>
> 		spin_unlock_irq(&port_user_lock);
>
> +		evtchn_unbind_from_user(u, unbind.port);
> +
> 		rc = 0;
> 		break;
> 	}
> @@ -484,11 +486,18 @@ static int evtchn_release(struct inode *inode, struct file *filp)
> 		if (get_port_user(i) != u)
> 			continue;
>
> -		evtchn_unbind_from_user(get_port_user(i), i);
> +		disable_irq(irq_from_evtchn(i));
> 	}
>
> 	spin_unlock_irq(&port_user_lock);
>
> +	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
> +		if (get_port_user(i) != u)
> +			continue;
> +
> +		evtchn_unbind_from_user(get_port_user(i), i);
> +	}
> +
> 	kfree(u->name);
> 	kfree(u);
Pasi Kärkkäinen
2010-Jun-07 12:58 UTC
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
On Sun, Jun 06, 2010 at 09:54:01PM +0300, Pasi Kärkkäinen wrote:
> On Sun, Jun 06, 2010 at 10:41:04AM -0700, Jeremy Fitzhardinge wrote:
> > On 06/06/2010 10:33 AM, Pasi Kärkkäinen wrote:
> > > Hello,
> > >
> > > I just tried the latest xen/stable-2.6.32.x kernel, ie. 2.6.32.15, with Xen 4.0.0,
> > > and I got this:
> > >
> > > http://pasik.reaktio.net/xen/pv_ops-dom0-debug/log-2.6.32.15-pvops-dom0-xen-stable-x86_64.txt
> > >
> >
> > Does this help?
> >
>
> It gave failing hunks so I had to manually apply it to 2.6.32.15,
> but it seems to fix that issue. No "unsafe lock order" messages anymore.
>

Hmm.. it seems I still get this:

device vif1.0 entered promiscuous mode
virbr0: topology change detected, propagating
virbr0: port 1(vif1.0) entering forwarding state
alloc irq_desc for 1242 on node 0
alloc kstat_irqs on node 0
alloc irq_desc for 1241 on node 0
alloc kstat_irqs on node 0
alloc irq_desc for 1240 on node 0
alloc kstat_irqs on node 0
alloc irq_desc for 1239 on node 0
alloc kstat_irqs on node 0
------------[ cut here ]------------
WARNING: at kernel/lockdep.c:2323 trace_hardirqs_on_caller+0xb7/0x135()
Hardware name: X7SB4/E
Modules linked in: xen_gntdev ipt_MASQUERADE iptable_nat nf_nat bridge stp llc sunrpc ip6t_REJECT nf_conntrack_ipv6 ip6table_filter ip6_tables ipv6 xen_evtchn xenfs e1000e iTCO_wdt i2c_i801 joydev iTCO_vendor_support serio_raw shpchp pcspkr floppy usb_storage video output aic79xx scsi_transport_spi radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core [last unloaded: scsi_wait_scan]
Pid: 23, comm: xenwatch Not tainted 2.6.32.15 #3
Call Trace:
<IRQ> [<ffffffff81059c11>] warn_slowpath_common+0x7c/0x94
[<ffffffff81478acb>] ? _spin_unlock_irq+0x30/0x3c
[<ffffffff81059c3d>] warn_slowpath_null+0x14/0x16
[<ffffffff8108b156>] trace_hardirqs_on_caller+0xb7/0x135
[<ffffffff8108b1e1>] trace_hardirqs_on+0xd/0xf
[<ffffffff81478acb>] _spin_unlock_irq+0x30/0x3c
[<ffffffff812c19b9>] add_to_net_schedule_list_tail+0x92/0x9b
[<ffffffff812c19fa>] netif_be_int+0x38/0xd0
[<ffffffff810b80f4>] handle_IRQ_event+0x53/0x119
[<ffffffff810ba096>] handle_level_irq+0x7d/0xdf
[<ffffffff812b72bd>] __xen_evtchn_do_upcall+0xe7/0x168
[<ffffffff812b7820>] xen_evtchn_do_upcall+0x37/0x4c
[<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
<EOI> [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
[<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
[<ffffffff812b9fe3>] ? notify_remote_via_evtchn+0x1e/0x44
[<ffffffff81477801>] ? __mutex_lock_common+0x36a/0x37b
[<ffffffff812ba966>] ? xs_talkv+0x5c/0x174
[<ffffffff812ba354>] ? xb_write+0x16e/0x18a
[<ffffffff812ba974>] ? xs_talkv+0x6a/0x174
[<ffffffff81242c46>] ? kasprintf+0x38/0x3a
[<ffffffff812babc3>] ? xs_single+0x3a/0x3c
[<ffffffff812bb271>] ? xenbus_read+0x42/0x5b
[<ffffffff812c416c>] ? frontend_changed+0x649/0x675
[<ffffffff812bc453>] ? xenbus_otherend_changed+0xe9/0x176
[<ffffffff8100f55f>] ? xen_restore_fl_direct_end+0x0/0x1
[<ffffffff8108d91e>] ? lock_release+0x198/0x1a5
[<ffffffff812bca7e>] ? frontend_changed+0x10/0x12
[<ffffffff812ba6eb>] ? xenwatch_thread+0x111/0x14c
[<ffffffff81079d4a>] ? autoremove_wake_function+0x0/0x39
[<ffffffff812ba5da>] ? xenwatch_thread+0x0/0x14c
[<ffffffff81079a78>] ? kthread+0x7f/0x87
[<ffffffff81013dea>] ? child_rip+0xa/0x20
[<ffffffff81013750>] ? restore_args+0x0/0x30
[<ffffffff81013de0>] ? child_rip+0x0/0x20
---[ end trace c5022d288d3812ac ]---
blkback: ring-ref 770, event-channel 9, protocol 2 (x86_32-abi)
alloc irq_desc for 1238 on node 0
alloc kstat_irqs on node 0
vif1.0: no IPv6 routers present

-- Pasi

> > From 3f5e554f669098c84c82ce75e7577f7e0f3fccde Mon Sep 17 00:00:00 2001
> > From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> > Date: Fri, 28 May 2010 15:28:27 -0700
> > Subject: [PATCH] xen/evtchn: don't do unbind_from_irqhandler under spinlock
> >
> > unbind_from_irqhandler can end up doing /proc operations, which can't
> > happen under a spinlock.  So before removing the IRQ handler,
> > disable the irq under the port_user lock (masking the underlying event
> > channel and making sure the irq handler isn't running concurrently and
> > won't start running), then remove the handler without the lock.
> >
> > Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> >
> > diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
> > index f79ac5c..6a3a129 100644
> > --- a/drivers/xen/evtchn.c
> > +++ b/drivers/xen/evtchn.c
> > @@ -375,10 +375,12 @@ static long evtchn_ioctl(struct file *file,
> > 		break;
> > 	}
> >
> > -	evtchn_unbind_from_user(u, unbind.port);
> > +	disable_irq(irq_from_evtchn(unbind.port));
> >
> > 	spin_unlock_irq(&port_user_lock);
> >
> > +	evtchn_unbind_from_user(u, unbind.port);
> > +
> > 	rc = 0;
> > 	break;
> > }
> > @@ -484,11 +486,18 @@ static int evtchn_release(struct inode *inode, struct file *filp)
> > 	if (get_port_user(i) != u)
> > 		continue;
> >
> > -	evtchn_unbind_from_user(get_port_user(i), i);
> > +	disable_irq(irq_from_evtchn(i));
> > }
> >
> > 	spin_unlock_irq(&port_user_lock);
> >
> > +	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
> > +	if (get_port_user(i) != u)
> > +		continue;
> > +
> > +	evtchn_unbind_from_user(get_port_user(i), i);
> > +	}
> > +
> > 	kfree(u->name);
> > 	kfree(u);
Jeremy Fitzhardinge
2010-Jun-07 16:50 UTC
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
On 06/07/2010 05:58 AM, Pasi Kärkkäinen wrote:
> On Sun, Jun 06, 2010 at 09:54:01PM +0300, Pasi Kärkkäinen wrote:
>
>> On Sun, Jun 06, 2010 at 10:41:04AM -0700, Jeremy Fitzhardinge wrote:
>>
>>> On 06/06/2010 10:33 AM, Pasi Kärkkäinen wrote:
>>>
>>>> Hello,
>>>>
>>>> I just tried the latest xen/stable-2.6.32.x kernel, ie. 2.6.32.15, with Xen 4.0.0,
>>>> and I got this:
>>>>
>>>> http://pasik.reaktio.net/xen/pv_ops-dom0-debug/log-2.6.32.15-pvops-dom0-xen-stable-x86_64.txt
>>>>
>>>>
>>> Does this help?
>>>
>>>
>> It gave failing hunks so I had to manually apply it to 2.6.32.15,
>> but it seems to fix that issue. No "unsafe lock order" messages anymore.
>>
>>
> Hmm.. it seems I still get this:
>

OK, thanks.  Let me look at it; that was a first cut patch I did the
other day when I noticed the problem, but I hadn't got around to testing
it myself.

    J

> device vif1.0 entered promiscuous mode
> virbr0: topology change detected, propagating
> virbr0: port 1(vif1.0) entering forwarding state
> alloc irq_desc for 1242 on node 0
> alloc kstat_irqs on node 0
> alloc irq_desc for 1241 on node 0
> alloc kstat_irqs on node 0
> alloc irq_desc for 1240 on node 0
> alloc kstat_irqs on node 0
> alloc irq_desc for 1239 on node 0
> alloc kstat_irqs on node 0
> ------------[ cut here ]------------
> WARNING: at kernel/lockdep.c:2323 trace_hardirqs_on_caller+0xb7/0x135()
> Hardware name: X7SB4/E
> Modules linked in: xen_gntdev ipt_MASQUERADE iptable_nat nf_nat bridge stp llc sunrpc ip6t_REJECT nf_conntrack_ipv6 ip6table_filter ip6_tables ipv6 xen_evtchn xenfs e1000e iTCO_wdt i2c_i801 joydev iTCO_vendor_support serio_raw shpchp pcspkr floppy usb_storage video output aic79xx scsi_transport_spi radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core [last unloaded: scsi_wait_scan]
> Pid: 23, comm: xenwatch Not tainted 2.6.32.15 #3
> Call Trace:
> <IRQ> [<ffffffff81059c11>] warn_slowpath_common+0x7c/0x94
> [<ffffffff81478acb>] ? _spin_unlock_irq+0x30/0x3c
> [<ffffffff81059c3d>] warn_slowpath_null+0x14/0x16
> [<ffffffff8108b156>] trace_hardirqs_on_caller+0xb7/0x135
> [<ffffffff8108b1e1>] trace_hardirqs_on+0xd/0xf
> [<ffffffff81478acb>] _spin_unlock_irq+0x30/0x3c
> [<ffffffff812c19b9>] add_to_net_schedule_list_tail+0x92/0x9b
> [<ffffffff812c19fa>] netif_be_int+0x38/0xd0
> [<ffffffff810b80f4>] handle_IRQ_event+0x53/0x119
> [<ffffffff810ba096>] handle_level_irq+0x7d/0xdf
> [<ffffffff812b72bd>] __xen_evtchn_do_upcall+0xe7/0x168
> [<ffffffff812b7820>] xen_evtchn_do_upcall+0x37/0x4c
> [<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
> <EOI> [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
> [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
> [<ffffffff812b9fe3>] ? notify_remote_via_evtchn+0x1e/0x44
> [<ffffffff81477801>] ? __mutex_lock_common+0x36a/0x37b
> [<ffffffff812ba966>] ? xs_talkv+0x5c/0x174
> [<ffffffff812ba354>] ? xb_write+0x16e/0x18a
> [<ffffffff812ba974>] ? xs_talkv+0x6a/0x174
> [<ffffffff81242c46>] ? kasprintf+0x38/0x3a
> [<ffffffff812babc3>] ? xs_single+0x3a/0x3c
> [<ffffffff812bb271>] ? xenbus_read+0x42/0x5b
> [<ffffffff812c416c>] ? frontend_changed+0x649/0x675
> [<ffffffff812bc453>] ? xenbus_otherend_changed+0xe9/0x176
> [<ffffffff8100f55f>] ? xen_restore_fl_direct_end+0x0/0x1
> [<ffffffff8108d91e>] ? lock_release+0x198/0x1a5
> [<ffffffff812bca7e>] ? frontend_changed+0x10/0x12
> [<ffffffff812ba6eb>] ? xenwatch_thread+0x111/0x14c
> [<ffffffff81079d4a>] ? autoremove_wake_function+0x0/0x39
> [<ffffffff812ba5da>] ? xenwatch_thread+0x0/0x14c
> [<ffffffff81079a78>] ? kthread+0x7f/0x87
> [<ffffffff81013dea>] ? child_rip+0xa/0x20
> [<ffffffff81013750>] ? restore_args+0x0/0x30
> [<ffffffff81013de0>] ? child_rip+0x0/0x20
> ---[ end trace c5022d288d3812ac ]---
> blkback: ring-ref 770, event-channel 9, protocol 2 (x86_32-abi)
> alloc irq_desc for 1238 on node 0
> alloc kstat_irqs on node 0
> vif1.0: no IPv6 routers present
>
> -- Pasi
>
>
>>> From 3f5e554f669098c84c82ce75e7577f7e0f3fccde Mon Sep 17 00:00:00 2001
>>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>> Date: Fri, 28 May 2010 15:28:27 -0700
>>> Subject: [PATCH] xen/evtchn: don't do unbind_from_irqhandler under spinlock
>>>
>>> unbind_from_irqhandler can end up doing /proc operations, which can't
>>> happen under a spinlock.  So before removing the IRQ handler,
>>> disable the irq under the port_user lock (masking the underlying event
>>> channel and making sure the irq handler isn't running concurrently and
>>> won't start running), then remove the handler without the lock.
>>>
>>> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>>>
>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>> index f79ac5c..6a3a129 100644
>>> --- a/drivers/xen/evtchn.c
>>> +++ b/drivers/xen/evtchn.c
>>> @@ -375,10 +375,12 @@ static long evtchn_ioctl(struct file *file,
>>> 		break;
>>> 	}
>>>
>>> -	evtchn_unbind_from_user(u, unbind.port);
>>> +	disable_irq(irq_from_evtchn(unbind.port));
>>>
>>> 	spin_unlock_irq(&port_user_lock);
>>>
>>> +	evtchn_unbind_from_user(u, unbind.port);
>>> +
>>> 	rc = 0;
>>> 	break;
>>> }
>>> @@ -484,11 +486,18 @@ static int evtchn_release(struct inode *inode, struct file *filp)
>>> 	if (get_port_user(i) != u)
>>> 		continue;
>>>
>>> -	evtchn_unbind_from_user(get_port_user(i), i);
>>> +	disable_irq(irq_from_evtchn(i));
>>> }
>>>
>>> 	spin_unlock_irq(&port_user_lock);
>>>
>>> +	for (i = 0; i < NR_EVENT_CHANNELS; i++) {
>>> +	if (get_port_user(i) != u)
>>> +		continue;
>>> +
>>> +	evtchn_unbind_from_user(get_port_user(i), i);
>>> +	}
>>> +
>>> 	kfree(u->name);
>>> 	kfree(u);
Pasi Kärkkäinen
2010-Jul-20 11:53 UTC
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
On Mon, Jun 07, 2010 at 09:50:13AM -0700, Jeremy Fitzhardinge wrote:
> On 06/07/2010 05:58 AM, Pasi Kärkkäinen wrote:
> > On Sun, Jun 06, 2010 at 09:54:01PM +0300, Pasi Kärkkäinen wrote:
> >> On Sun, Jun 06, 2010 at 10:41:04AM -0700, Jeremy Fitzhardinge wrote:
> >>> On 06/06/2010 10:33 AM, Pasi Kärkkäinen wrote:
> >>>> Hello,
> >>>>
> >>>> I just tried the latest xen/stable-2.6.32.x kernel, ie. 2.6.32.15, with Xen 4.0.0,
> >>>> and I got this:
> >>>>
> >>>> http://pasik.reaktio.net/xen/pv_ops-dom0-debug/log-2.6.32.15-pvops-dom0-xen-stable-x86_64.txt
> >>>>
> >>> Does this help?
> >>>
> >> It gave failing hunks so I had to manually apply it to 2.6.32.15,
> >> but it seems to fix that issue. No "unsafe lock order" messages anymore.
> >>
> > Hmm.. it seems I still get this:
> >
> OK, thanks. Let me look at it; that was a first cut patch I did the
> other day when I noticed the problem, but I hadn't got around to testing
> it myself.

I just tried the latest xen/stable-2.6.32.16 (as of today),
without any additional patches, and I get this:

device vif1.0 entered promiscuous mode
virbr0: topology change detected, propagating
virbr0: port 1(vif1.0) entering forwarding state
alloc irq_desc for 1242 on node 0
alloc kstat_irqs on node 0
alloc irq_desc for 1241 on node 0
alloc kstat_irqs on node 0
alloc irq_desc for 1240 on node 0
alloc kstat_irqs on node 0
blkback: ring-ref 8, event-channel 8, protocol 1 (x86_64-abi)
alloc irq_desc for 1239 on node 0
alloc kstat_irqs on node 0
vif1.0: no IPv6 routers present
alloc irq_desc for 1238 on node 0
alloc kstat_irqs on node 0
------------[ cut here ]------------
WARNING: at kernel/lockdep.c:2323 trace_hardirqs_on_caller+0xb7/0x135()
Hardware name: X7SB4/E
Modules linked in: ipt_MASQUERADE iptable_nat nf_nat bridge stp llc sunrpc ip6t_REJECT nf_conntrack_ipv6 ip6table_filter ip6_tables ipv6 xen_gntdev xen_evtchn xenfs e1000e shpchp i2c_i801 pcspkr iTCO_wdt iTCO_vendor_support serio_raw joydev floppy usb_storage video output aic79xx scsi_transport_spi radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core [last unloaded: scsi_wait_scan]
Pid: 23, comm: xenwatch Not tainted 2.6.32.16 #6
Call Trace:
 <IRQ>  [<ffffffff81059c41>] warn_slowpath_common+0x7c/0x94
 [<ffffffff8147896b>] ? _spin_unlock_irq+0x30/0x3c
 [<ffffffff81059c6d>] warn_slowpath_null+0x14/0x16
 [<ffffffff8108b186>] trace_hardirqs_on_caller+0xb7/0x135
 [<ffffffff8108b211>] trace_hardirqs_on+0xd/0xf
 [<ffffffff8147896b>] _spin_unlock_irq+0x30/0x3c
 [<ffffffff812c15d7>] add_to_net_schedule_list_tail+0x92/0x9b
 [<ffffffff812c1618>] netif_be_int+0x38/0xcd
 [<ffffffff810b8144>] handle_IRQ_event+0x53/0x119
 [<ffffffff810ba0e6>] handle_level_irq+0x7d/0xdf
 [<ffffffff812b6f45>] __xen_evtchn_do_upcall+0xe1/0x16e
 [<ffffffff812b74b8>] xen_evtchn_do_upcall+0x37/0x4c
 [<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
 <EOI>  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
 [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
 [<ffffffff812b9f9b>] ? notify_remote_via_evtchn+0x1e/0x44
 [<ffffffff814776a1>] ? __mutex_lock_common+0x36a/0x37b
 [<ffffffff812ba8b8>] ? xs_talkv+0x5c/0x174
 [<ffffffff812ba30c>] ? xb_write+0x16e/0x18a
 [<ffffffff812ba8c6>] ? xs_talkv+0x6a/0x174
 [<ffffffff81242b86>] ? kasprintf+0x38/0x3a
 [<ffffffff812bab15>] ? xs_single+0x3a/0x3c
 [<ffffffff812bb0f0>] ? xenbus_read+0x42/0x5b
 [<ffffffff812c3de4>] ? frontend_changed+0x655/0x681
 [<ffffffff812bc0e7>] ? xenbus_otherend_changed+0xe9/0x176
 [<ffffffff8100f34f>] ? xen_restore_fl_direct_end+0x0/0x1
 [<ffffffff8108d94e>] ? lock_release+0x198/0x1a5
 [<ffffffff812bc712>] ? frontend_changed+0x10/0x12
 [<ffffffff812ba63d>] ? xenwatch_thread+0x111/0x14c
 [<ffffffff81079d7a>] ? autoremove_wake_function+0x0/0x39
 [<ffffffff812ba52c>] ? xenwatch_thread+0x0/0x14c
 [<ffffffff81079aa8>] ? kthread+0x7f/0x87
 [<ffffffff81013dea>] ? child_rip+0xa/0x20
 [<ffffffff81013750>] ? restore_args+0x0/0x30
 [<ffffffff81013de0>] ? child_rip+0x0/0x20
---[ end trace b036c0423b0ee26a ]---
alloc irq_desc for 1237 on node 0
alloc kstat_irqs on node 0
device vif2.0 entered promiscuous mode

-- Pasi

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Jeremy Fitzhardinge
2010-Jul-20 16:17 UTC
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
On 07/20/2010 04:53 AM, Pasi Kärkkäinen wrote:
> On Mon, Jun 07, 2010 at 09:50:13AM -0700, Jeremy Fitzhardinge wrote:
> > OK, thanks. Let me look at it; that was a first cut patch I did the
> > other day when I noticed the problem, but I hadn't got around to testing
> > it myself.
> >
> I just tried the latest xen/stable-2.6.32.16 (as of today),
> without any additional patches, and I get this:
>
Is that new?

J
Pasi Kärkkäinen
2010-Jul-20 17:17 UTC
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
On Tue, Jul 20, 2010 at 09:17:26AM -0700, Jeremy Fitzhardinge wrote:
> On 07/20/2010 04:53 AM, Pasi Kärkkäinen wrote:
> > I just tried the latest xen/stable-2.6.32.16 (as of today),
> > without any additional patches, and I get this:
> >
> Is that new?
>
I think it's the same as earlier..

-- Pasi
Bruce Edge
2010-Aug-26 18:05 UTC
Re: [Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock
On Tue, Jul 20, 2010 at 10:17 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> On Tue, Jul 20, 2010 at 09:17:26AM -0700, Jeremy Fitzhardinge wrote:
> > Is that new?
> >
> I think it's the same as earlier..
>
> -- Pasi

I'm also seeing this with the newly released xen 4.0.1 and a Not tainted
2.6.32.18 pvops sync'd yesterday:

2010-08-26T10:57:12.085252-07:00 kaan-18 [ 4698.128356] blkback: ring-ref 8, event-channel 93, protocol 1 (x86_64-abi)
2010-08-26T10:57:12.126039-07:00 kaan-18 [ 4698.168555] ------------[ cut here ]------------
2010-08-26T10:57:12.126141-07:00 kaan-18 [ 4698.168569] WARNING: at kernel/lockdep.c:2323 trace_hardirqs_on_caller+0x12f/0x190()
2010-08-26T10:57:12.126220-07:00 kaan-18 [ 4698.168574] Hardware name: ProLiant DL380 G6
2010-08-26T10:57:12.126297-07:00 kaan-18 [ 4698.168578] Modules linked in: xt_physdev xenfs xen_gntdev xen_evtchn ipv6 xen_pciback tun fbcon tileblit font bitblit softcursor bridge stp llc radeon ttm drm_kms_helper lp drm i2c_algo_bit ipmi_si joydev i2c_core ipmi_msghandler hpilo hpwdt parport serio_raw usbhid hid cciss usb_storage
2010-08-26T10:57:12.126405-07:00 kaan-18 [ 4698.168644] Pid: 11, comm: xenwatch Not tainted 2.6.32.18 #1
2010-08-26T10:57:12.126482-07:00 kaan-18 [ 4698.168648] Call Trace:
2010-08-26T10:57:12.126559-07:00 kaan-18 [ 4698.168651]  <IRQ>  [<ffffffff810a9f7f>] ? trace_hardirqs_on_caller+0x12f/0x190
2010-08-26T10:57:12.126655-07:00 kaan-18 [ 4698.168663]  [<ffffffff8106bd60>] warn_slowpath_common+0x80/0xd0
2010-08-26T10:57:12.126731-07:00 kaan-18 [ 4698.168671]  [<ffffffff815f3d70>] ? _spin_unlock_irq+0x30/0x40
2010-08-26T10:57:12.126806-07:00 kaan-18 [ 4698.168676]  [<ffffffff8106bdc4>] warn_slowpath_null+0x14/0x20
2010-08-26T10:57:12.126881-07:00 kaan-18 [ 4698.168682]  [<ffffffff810a9f7f>] trace_hardirqs_on_caller+0x12f/0x190
2010-08-26T10:57:12.126957-07:00 kaan-18 [ 4698.168688]  [<ffffffff810a9fed>] trace_hardirqs_on+0xd/0x10
2010-08-26T10:57:12.127034-07:00 kaan-18 [ 4698.168693]  [<ffffffff815f3d70>] _spin_unlock_irq+0x30/0x40
2010-08-26T10:57:12.127110-07:00 kaan-18 [ 4698.168700]  [<ffffffff813c4e75>] add_to_net_schedule_list_tail+0x85/0xd0
2010-08-26T10:57:12.127186-07:00 kaan-18 [ 4698.168724]  [<ffffffff813c60c6>] netif_be_int+0x36/0x160
2010-08-26T10:57:12.127263-07:00 kaan-18 [ 4698.168730]  [<ffffffff810e0ea0>] handle_IRQ_event+0x70/0x180
2010-08-26T10:57:12.127338-07:00 kaan-18 [ 4698.168736]  [<ffffffff810e3c69>] handle_level_irq+0x99/0x140
2010-08-26T10:57:12.127433-07:00 kaan-18 [ 4698.168742]  [<ffffffff813b8acf>] __xen_evtchn_do_upcall+0x1bf/0x1f0
2010-08-26T10:57:12.127509-07:00 kaan-18 [ 4698.168747]  [<ffffffff813b908d>] xen_evtchn_do_upcall+0x3d/0x60
2010-08-26T10:57:12.127577-07:00 kaan-18 [ 4698.168754]  [<ffffffff8101647e>] xen_do_hypervisor_callback+0x1e/0x30
2010-08-26T10:57:12.127584-07:00 kaan-18 [ 4698.168758]  <EOI>  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
2010-08-26T10:57:12.127589-07:00 kaan-18 [ 4698.168769]  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
2010-08-26T10:57:12.127595-07:00 kaan-18 [ 4698.168775]  [<ffffffff813bcd04>] ? xb_write+0x1e4/0x290
2010-08-26T10:57:12.127650-07:00 kaan-18 [ 4698.168780]  [<ffffffff813bd77a>] ? xs_talkv+0x6a/0x1f0
2010-08-26T10:57:12.127657-07:00 kaan-18 [ 4698.168785]  [<ffffffff813bd788>] ? xs_talkv+0x78/0x1f0
2010-08-26T10:57:12.127662-07:00 kaan-18 [ 4698.168790]  [<ffffffff813bda7d>] ? xs_single+0x4d/0x60
2010-08-26T10:57:12.127667-07:00 kaan-18 [ 4698.168795]  [<ffffffff813be3b2>] ? xenbus_read+0x52/0x80
2010-08-26T10:57:12.127673-07:00 kaan-18 [ 4698.168800]  [<ffffffff813c86ac>] ? frontend_changed+0x48c/0x770
2010-08-26T10:57:12.127678-07:00 kaan-18 [ 4698.168806]  [<ffffffff813bf61d>] ? xenbus_otherend_changed+0xdd/0x1b0
2010-08-26T10:57:12.127684-07:00 kaan-18 [ 4698.168813]  [<ffffffff81010e3f>] ? xen_restore_fl_direct_end+0x0/0x1
2010-08-26T10:57:12.127689-07:00 kaan-18 [ 4698.168818]  [<ffffffff810ac610>] ? lock_release+0xb0/0x230
2010-08-26T10:57:12.127770-07:00 kaan-18 [ 4698.168823]  [<ffffffff813bf990>] ? frontend_changed+0x10/0x20
2010-08-26T10:57:12.127777-07:00 kaan-18 [ 4698.168829]  [<ffffffff813bd3a5>] ? xenwatch_thread+0x55/0x160
2010-08-26T10:57:12.127782-07:00 kaan-18 [ 4698.168836]  [<ffffffff810932b0>] ? autoremove_wake_function+0x0/0x40
2010-08-26T10:57:12.127788-07:00 kaan-18 [ 4698.168841]  [<ffffffff813bd350>] ? xenwatch_thread+0x0/0x160
2010-08-26T10:57:12.127793-07:00 kaan-18 [ 4698.168847]  [<ffffffff81092f36>] ? kthread+0x96/0xb0
2010-08-26T10:57:12.127799-07:00 kaan-18 [ 4698.168852]  [<ffffffff8101632a>] ? child_rip+0xa/0x20
2010-08-26T10:57:12.127804-07:00 kaan-18 [ 4698.168856]  [<ffffffff81015c90>] ? restore_args+0x0/0x30
2010-08-26T10:57:12.137042-07:00 kaan-18 [ 4698.168861]  [<ffffffff81016320>] ? child_rip+0x0/0x20
2010-08-26T10:57:12.137052-07:00 kaan-18 [ 4698.168865] ---[ end trace d11f63be69302ffc ]---
2010-08-26T10:57:18.775160-07:00 kaan-18 [ 4704.813405] vif3.0: no IPv6 routers present

'cpu/8/availability': 'online', 'cpu/9/availability': 'online', 'image/suspend-cancel': '1', 'cpu/5/availability': 'online', 'cpu/0/availability': 'online', 'image/features/pae-pgdir-above-4gb': '1', 'cpu/6/availability': 'online', 'console/type': 'xenconsoled', 'cpu/12/availability': 'online', 'name': 'dpm', 'cpu/2/availability': 'online', 'domid': '3', 'image/xen-version': 'xen-3.0', 'store/port': '1', 'cpu/10/availability': 'online'}
[2010-08-26 10:57:09 1840] DEBUG (DevController:95) DevController: writing {'protocol': 'x86_64-abi', 'state': '1', 'backend-id': '0', 'backend': '/local/domain/0/backend/console/3/0'} to /local/domain/3/device/console/0.
[2010-08-26 10:57:09 1840] DEBUG (DevController:97) DevController: writing {'domain': 'dpm', 'frontend': '/local/domain/3/device/console/0', 'uuid': '460962e0-bc9b-f7de-b5f4-70715fb5d614', 'frontend-id': '3', 'state': '1', 'location': '2', 'online': '1', 'protocol': 'vt100'} to /local/domain/0/backend/console/3/0.
[2010-08-26 10:57:09 1840] DEBUG (pciif:460) XendDomainInfo.handleAerStateWatch
[2010-08-26 10:57:09 1840] DEBUG (XendDomainInfo:1891) XendDomainInfo.handleShutdownWatch
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices tap2.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices vif.
[2010-08-26 10:57:09 1840] DEBUG (DevController:144) Waiting for 0.
[2010-08-26 10:57:09 1840] DEBUG (DevController:628) hotplugStatusCallback /local/domain/0/backend/vif/3/0/hotplug-status.
[2010-08-26 10:57:09 1840] DEBUG (DevController:642) hotplugStatusCallback 1.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices vkbd.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices ioports.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices tap.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices vif2.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices console.
[2010-08-26 10:57:09 1840] DEBUG (DevController:144) Waiting for 0.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices vscsi.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices vbd.
[2010-08-26 10:57:09 1840] DEBUG (DevController:144) Waiting for 51712.
[2010-08-26 10:57:09 1840] DEBUG (DevController:628) hotplugStatusCallback /local/domain/0/backend/vbd/3/51712/hotplug-status.
[2010-08-26 10:57:09 1840] DEBUG (DevController:642) hotplugStatusCallback 1.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices irq.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices vfb.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices pci.
[2010-08-26 10:57:09 1840] DEBUG (DevController:144) Waiting for 0.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices vusb.
[2010-08-26 10:57:09 1840] DEBUG (DevController:139) Waiting for devices vtpm.
[2010-08-26 10:57:09 1840] INFO (XendDomain:1206) Domain dpm (3) unpaused.

Possibly related, or not, domU is running out of memory: boot looks normal
until here:

[    0.659086] udev: starting version 151
[    0.783917] Initialising Xen virtual ethernet driver.
[    0.805892] blkfront: xvda: barriers enabled
[    0.807202]  xvda: xvda1 xvda2 < xvda5 >
[    1.048985] EXT4-fs (xvda1): mounted filesystem with ordered data mode
[    9.025260] Out of memory: kill process 633 (init) score 898 or a child
[    9.025413] Killed process 633 (init)
[    9.344581] Out of memory: kill process 534 (rsyslogd) score 428 or a child
[    9.344740] Killed process 629 (rsyslogd)
[    9.657456] Out of memory: kill process 424 (mountall) score 405 or a child
[    9.657613] Killed process 424 (mountall)

-Bruce