search for: 0x250

Displaying 20 results from an estimated 225 matches for "0x250".

2012 Nov 26
1
kernel panic on Xen
...73051] 0000160000000000 ffff880000000000 ffff88001e8f1db0 ffffffff8145699a [ 100.973051] Call Trace: [ 100.973051] [<ffffffff8145699a>] xennet_poll+0x7ca/0xe80 [ 100.973051] [<ffffffff814e3e51>] net_rx_action+0x151/0x2b0 [ 100.973051] [<ffffffff8106090d>] __do_softirq+0xbd/0x250 [ 100.973051] [<ffffffff81060b67>] run_ksoftirqd+0xc7/0x170 [ 100.973051] [<ffffffff81060aa0>] ? __do_softirq+0x250/0x250 [ 100.973051] [<ffffffff8107b0ac>] kthread+0x8c/0xa0 [ 100.973051] [<ffffffff8167ca04>] kernel_thread_helper+0x4/0x10 [ 100.973051] [<ffffff...
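Aside: the `symbol+0xOFF/0xSIZE` tokens in these traces are how the kernel prints a return address as an offset into a function of a given total length, which is why a search for "0x250" matches so many unrelated threads: it can appear as either the offset or the function size. A minimal sketch of parsing such a frame (the regex and variable names are illustrative, not taken from any of these posts):

```python
import re

# A frame like "__do_softirq+0xbd/0x250" means: return address is at
# offset 0xbd into a function whose total size is 0x250 bytes.
FRAME_RE = re.compile(r"(?P<sym>[\w.]+)\+0x(?P<off>[0-9a-f]+)/0x(?P<size>[0-9a-f]+)")

line = "[<ffffffff8106090d>] __do_softirq+0xbd/0x250"
m = FRAME_RE.search(line)
print(m.group("sym"))            # __do_softirq
print(int(m.group("off"), 16))   # 189  (0xbd)
print(int(m.group("size"), 16))  # 592  (0x250)
```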
2019 Aug 02
1
nouveau problem
...ocaldomain kernel: [<ffffffffc057b4c7>] gf119_disp_core_fini+0x107/0x160 [nouveau] Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffffc0579423>] nv50_disp_chan_fini+0x23/0x40 [nouveau] Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffffc04fc0bf>] nvkm_object_fini+0xdf/0x250 [nouveau] Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffffc04fc078>] nvkm_object_fini+0x98/0x250 [nouveau] Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffffc04fc078>] nvkm_object_fini+0x98/0x250 [nouveau] Aug 02 14:19:42 localhost.localdomain kernel: [<ffffffffc04fc...
2018 Jan 05
4
Centos 6 2.6.32-696.18.7.el6.x86_64 does not boot in Xen PV mode
...ly) KERNEL supported cpus: (early) Intel GenuineIntel (early) AMD AuthenticAMD (early) Centaur CentaurHauls (early) 1 multicall(s) failed: cpu 0 (early) Pid: 0, comm: swapper Not tainted 2.6.32-696.18.7.el6.x86_64 #1 (early) Call Trace: (early) [<ffffffff81004843>] ? xen_mc_flush+0x1c3/0x250 (early) [<ffffffff81006b9e>] ? xen_extend_mmu_update+0xde/0x1b0 (early) [<ffffffff81006fcd>] ? xen_set_pmd_hyper+0x9d/0xc0 (early) [<ffffffff81c5e8ac>] ? early_ioremap_init+0x98/0x133 (early) [<ffffffff81c45221>] ? setup_arch+0x40/0xca6 (early) [<ffffffff8107e0ee>...
2014 Oct 20
2
INFO: task echo:622 blocked for more than 120 seconds. - 3.18.0-0.rc0.git
...t;] worker_thread+0x6b/0x4a0 [ 359.054444] [<ffffffff810cd3f0>] ? process_one_work+0x850/0x850 [ 359.054977] [<ffffffff810d37ab>] kthread+0x10b/0x130 [ 359.055521] [<ffffffff81028cc9>] ? sched_clock+0x9/0x10 [ 359.056054] [<ffffffff810d36a0>] ? kthread_create_on_node+0x250/0x250 [ 359.056600] [<ffffffff81862abc>] ret_from_fork+0x7c/0xb0 [ 359.057145] [<ffffffff810d36a0>] ? kthread_create_on_node+0x250/0x250 [ 359.057668] 4 locks held by kworker/u16:2/81: [ 359.058212] #0: ("%s""netns"){.+.+.+}, at: [<ffffffff810ccd1f>] pr...
2005 Apr 11
2
help please.
...>] ip_conntrack_init+0x255/0x369 [ip_conntrack] Apr 10 18:19:07 trinity kernel: [<f0d45c17>] init_or_cleanup+0x17/0x290 [ip_conntrack] Apr 10 18:19:07 trinity kernel: [<f0cdb00f>] init+0xf/0x20 [ip_conntrack] Apr 10 18:19:07 trinity kernel: [<c013d86a>] sys_init_module+0x18a/0x250 Apr 10 18:19:07 trinity kernel: [<c010328f>] syscall_call+0x7/0xb Apr 10 18:19:07 trinity kernel: BUG: using smp_processor_id() in preemptible [00000001] code: modprobe/11081 Apr 10 18:19:07 trinity kernel: caller is ip_conntrack_helper_register+0x18/0x170 [ip_conntrack] Apr 10 18:19:07 trin...
2018 Feb 23
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...4] __handle_domain_irq+0x84/0xf0 [ 162.493834] gic_handle_irq+0x58/0xa8 [ 162.498464] el1_irq+0xb4/0x130 [ 162.500621] arch_cpu_idle+0x18/0x28 [ 162.504729] default_idle_call+0x1c/0x34 [ 162.508005] do_idle+0x17c/0x1f0 [ 162.510184] cpu_startup_entry+0x20/0x28 [ 162.515050] rest_init+0x250/0x260 [ 162.518228] start_kernel+0x3f0/0x41c [ 162.522987] BUG: sleeping function called from invalid context at mm/mempool.c:320 [ 162.533762] in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/0 [ 162.540375] INFO: lockdep is turned off. [ 162.542696] irq event stamp: 3670810 [ 16...
2005 Dec 05
11
Xen 3.0 and Hyperthreading an issue?
Just gave 3.0 a spin. Had been running 2.0.7 for the past 3 months or so without problems (aside from intermittent failure during live migration). Anyway, 3.0 seems to have an issue with my machine. It starts up the 4 domains that I've got defined (was running 6 user domains with 2.0.7, but two of those were running 2.4 kernels which I can't seem to build with Xen 3.0 yet, and
2018 Jan 06
0
Centos 6 2.6.32-696.18.7.el6.x86_64 does not boot in Xen PV mode
...>(early) Intel GenuineIntel >(early) AMD AuthenticAMD >(early) Centaur CentaurHauls >(early) 1 multicall(s) failed: cpu 0 >(early) Pid: 0, comm: swapper Not tainted 2.6.32-696.18.7.el6.x86_64 #1 >(early) Call Trace: >(early) [<ffffffff81004843>] ? xen_mc_flush+0x1c3/0x250 >(early) [<ffffffff81006b9e>] ? xen_extend_mmu_update+0xde/0x1b0 >(early) [<ffffffff81006fcd>] ? xen_set_pmd_hyper+0x9d/0xc0 >(early) [<ffffffff81c5e8ac>] ? early_ioremap_init+0x98/0x133 >(early) [<ffffffff81c45221>] ? setup_arch+0x40/0xca6 >(early) [<...
2018 Feb 26
0
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
...> [ 162.493834] gic_handle_irq+0x58/0xa8 > [ 162.498464] el1_irq+0xb4/0x130 > [ 162.500621] arch_cpu_idle+0x18/0x28 > [ 162.504729] default_idle_call+0x1c/0x34 > [ 162.508005] do_idle+0x17c/0x1f0 > [ 162.510184] cpu_startup_entry+0x20/0x28 > [ 162.515050] rest_init+0x250/0x260 > [ 162.518228] start_kernel+0x3f0/0x41c > [ 162.522987] BUG: sleeping function called from invalid context at mm/mempool.c:320 > [ 162.533762] in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/0 > [ 162.540375] INFO: lockdep is turned off. > [ 162.542696] irq e...
2005 Oct 26
4
ubuntu 5.1 & wine 0.9
...stub fixme:ttydrv:TTYDRV_GetBitmapBits (0x23c, 0x7bc50c94, 128): stub fixme:ttydrv:TTYDRV_GetBitmapBits (0x248, 0x7bc50d84, 32): stub fixme:ttydrv:TTYDRV_GetBitmapBits (0x244, 0x7bc50da4, 32): stub err:imagelist:ImageList_ReplaceIcon no color! fixme:ttydrv:TTYDRV_DC_StretchBlt (0x1d4, 0, 0, 16, 16, 0x250, 0, 0, 16, 32, 13369376): stub fixme:ttydrv:TTYDRV_DC_StretchBlt (0x1dc, 0, 0, 16, 16, 0x250, 0, 0, 16, 32, 13369376): stub fixme:ttydrv:TTYDRV_GetBitmapBits (0x25c, 0x7bc50d84, 128): stub fixme:ttydrv:TTYDRV_GetBitmapBits (0x258, 0x7bc50e04, 128): stub err:imagelist:ImageList_ReplaceIcon no colo...
2014 Aug 21
1
Cluster blocked, so as to reboot all nodes to avoid it. Is there any patchs for it? Thanks.
Hi, everyone. We have hit this blocked cluster several times, and the log is always the same; we have to reboot every node of the cluster to recover. Is there any patch that fixes this bug? [<ffffffff817539a5>] schedule_timeout+0x1e5/0x250 [<ffffffff81755a77>] wait_for_completion+0xa7/0x160 [<ffffffff8109c9b0>] ? try_to_wake_up+0x2c0/0x2c0 [<ffffffffa0564063>] __ocfs2_cluster_lock.isra.30+0x1f3/0x820 [ocfs2] As we test with many nodes in one cluster, maybe ten or twenty, the cluster is always blocked, an...
2012 Nov 16
5
[ 3009.778974] mcelog:16842 map pfn expected mapping type write-back for [mem 0x0009f000-0x000a0fff], got uncached-minus
...[<ffffffff81064e72>] mmput+0x52/0xd0 [ 3009.924252] [<ffffffff810652b7>] dup_mm+0x3c7/0x510 [ 3009.933839] [<ffffffff81065fd5>] copy_process+0xac5/0x14a0 [ 3009.943430] [<ffffffff81066af3>] do_fork+0x53/0x360 [ 3009.952843] [<ffffffff810b25c7>] ? lock_release+0x117/0x250 [ 3009.962283] [<ffffffff817d26c0>] ? _raw_spin_unlock+0x30/0x60 [ 3009.971532] [<ffffffff817d3495>] ? sysret_check+0x22/0x5d [ 3009.980820] [<ffffffff81017523>] sys_clone+0x23/0x30 [ 3009.990046] [<ffffffff817d37f3>] stub_clone+0x13/0x20 [ 3009.999335] [<ffffffff817...
2012 Sep 12
2
Deadlock in btrfs-cleaner, related to snapshot deletion
....318213] ffffffff81c14440 ffff88040b3496d0 ffff88040c34b6b0 ffff880405a04408 [ 386.318220] Call Trace: [ 386.318236] [<ffffffff81139f03>] ? activate_page+0x83/0xa0 [ 386.318245] [<ffffffff8169d0e9>] schedule+0x29/0x70 [ 386.318305] [<ffffffffa011b29d>] btrfs_tree_lock+0xcd/0x250 [btrfs] [ 386.318314] [<ffffffff8107ccd0>] ? add_wait_queue+0x60/0x60 [ 386.318348] [<ffffffffa00dc4f8>] btrfs_init_new_buffer+0x68/0x140 [btrfs] [ 386.318379] [<ffffffffa00dc66f>] btrfs_alloc_free_block+0x9f/0x220 [btrfs] [ 386.318408] [<ffffffffa00c83b2>] __btrfs_c...
2019 Dec 23
5
[PATCH net] virtio_net: CTRL_GUEST_OFFLOADS depends on CTRL_VQ
...tnet_set_features+0x90/0xf0 [virtio_net] > > __netdev_update_features+0x271/0x980 > > ? nlmsg_notify+0x5b/0xa0 > > dev_disable_lro+0x2b/0x190 > > ? inet_netconf_notify_devconf+0xe2/0x120 > > devinet_sysctl_forward+0x176/0x1e0 > > proc_sys_call_handler+0x1f0/0x250 > > proc_sys_write+0xf/0x20 > > __vfs_write+0x3e/0x190 > > ? __sb_start_write+0x6d/0xd0 > > vfs_write+0xd3/0x190 > > ksys_write+0x68/0xd0 > > __ia32_sys_write+0x14/0x20 > > do_fast_syscall_32+0x86/0xe0 > > entry_SYSENTER_compat+0x7c/0x8e >...
2019 Jun 14
0
[PATCH v2] drm/nouveau/dmem: missing mutex_lock in error path
...drm_ioctl+0x308/0x530 [ 1295.063384] ? drm_version+0x150/0x150 [ 1295.067153] ? find_held_lock+0xac/0xd0 [ 1295.070996] ? __pm_runtime_resume+0x3f/0xa0 [ 1295.075285] ? mark_held_locks+0x29/0xa0 [ 1295.079230] ? _raw_spin_unlock_irqrestore+0x3c/0x50 [ 1295.084232] ? lockdep_hardirqs_on+0x17d/0x250 [ 1295.088768] nouveau_drm_ioctl+0x9a/0x100 [nouveau] [ 1295.093661] do_vfs_ioctl+0x137/0x9a0 [ 1295.097341] ? ioctl_preallocate+0x140/0x140 [ 1295.101623] ? match_held_lock+0x1b/0x230 [ 1295.105646] ? match_held_lock+0x1b/0x230 [ 1295.109660] ? find_held_lock+0xac/0xd0 [ 1295.113512] ? __do...
2019 Jul 26
0
[PATCH AUTOSEL 5.2 85/85] drm/nouveau/dmem: missing mutex_lock in error path
...drm_ioctl+0x308/0x530 [ 1295.063384] ? drm_version+0x150/0x150 [ 1295.067153] ? find_held_lock+0xac/0xd0 [ 1295.070996] ? __pm_runtime_resume+0x3f/0xa0 [ 1295.075285] ? mark_held_locks+0x29/0xa0 [ 1295.079230] ? _raw_spin_unlock_irqrestore+0x3c/0x50 [ 1295.084232] ? lockdep_hardirqs_on+0x17d/0x250 [ 1295.088768] nouveau_drm_ioctl+0x9a/0x100 [nouveau] [ 1295.093661] do_vfs_ioctl+0x137/0x9a0 [ 1295.097341] ? ioctl_preallocate+0x140/0x140 [ 1295.101623] ? match_held_lock+0x1b/0x230 [ 1295.105646] ? match_held_lock+0x1b/0x230 [ 1295.109660] ? find_held_lock+0xac/0xd0 [ 1295.113512] ? __do...