search for: 0x2c0

Displaying 20 results from an estimated 186 matches for "0x2c0".
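Every hit below quotes kernel stack frames in the `symbol+0xOFFSET/0xSIZE` form, where OFFSET is the return-address offset into the function and SIZE is the function's total compiled length; a query for "0x2c0" therefore mostly matches functions that are 0x2c0 (704) bytes long, such as `kfree+0x155/0x2c0`. A minimal, illustrative Python sketch of that notation (the regex and helper name are mine, not taken from any of the threads below):

```python
import re

# Matches kernel trace frames of the form symbol+0xOFFSET/0xSIZE,
# e.g. "kfree+0x155/0x2c0" or "__ocfs2_cluster_lock.isra.30+0x1f3/0x820".
FRAME_RE = re.compile(r'(?P<sym>[\w.]+)\+0x(?P<off>[0-9a-f]+)/0x(?P<size>[0-9a-f]+)')

def parse_frame(text):
    """Return (symbol, offset, size) for the first frame in text, else None."""
    m = FRAME_RE.search(text)
    if m is None:
        return None
    return m.group('sym'), int(m.group('off'), 16), int(m.group('size'), 16)

print(parse_frame('[<ffffffff81225255>] kfree+0x155/0x2c0'))  # → ('kfree', 341, 704)
```

A frame like `try_to_wake_up+0x2c0/0x2c0` (offset equal to size) typically means the saved return address points just past the function, i.e. the frame is a stale or questionable entry, which is why such lines are often prefixed with `?` in the traces.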

2006 Jun 26
5
[Bug 339] Kernel panic on bridged packet
...00000c printing eip: c0359b67 *pde = 00000000 Oops: 0000 [#1] Modules linked in: ebt_ip ebtable_filter ebtables rtc sata_nv nvsound sata_sil nvnet eepro100 3c59x r8169 CPU: 0 EIP: 0060:[<c0359b67>] Tainted: P VLI EFLAGS: 00010282 (2.6.15) EIP is at br_nf_pre_routing_finish+0x1a/0x2c0 eax: 00000000 ebx: 00000000 ecx: c0475e9c edx: dfb2c5a0 esi: cv760820 edi: 80000000 ebp: deefe000 esp: c0475de0 ds: 007b es: 007b ss: 0068 Process swapper (pid: , threadinfo=c0474000 task=c03fbb00) Stack: c0475e9c c8c6f082 c04d4020 c0475e9c fb0000e0 c0345127 00000000 c0475e9c...
2014 Aug 21
1
Cluster blocked; we have to reboot all nodes to avoid it. Are there any patches for it? Thanks.
...er several times, and the log is always the same; we have to reboot all the nodes of the cluster to avoid it. Is there any patch that fixes this bug? [<ffffffff817539a5>] schedule_timeout+0x1e5/0x250 [<ffffffff81755a77>] wait_for_completion+0xa7/0x160 [<ffffffff8109c9b0>] ? try_to_wake_up+0x2c0/0x2c0 [<ffffffffa0564063>] __ocfs2_cluster_lock.isra.30+0x1f3/0x820 [ocfs2] As we test with many nodes in one cluster, maybe ten or twenty, the cluster is always blocked, and the log is below. The kernel version is 3.13.6. Aug 20 10:05:43 server211 kernel: [82025.281828] T...
2004 Jun 10
1
ext3 EIP
...e+0/96] ext3_releasepage+0x0/0x60 Jun 10 10:28:59 shawarma kernel: [try_to_release_page+56/96] try_to_release_page+0x38/0x60 Jun 10 10:28:59 shawarma kernel: [shrink_list+826/1056] shrink_list+0x33a/0x420 Jun 10 10:28:59 shawarma kernel: [ext3_get_block_handle+126/704] ext3_get_block_handle+0x7e/0x2c0 Jun 10 10:28:59 shawarma kernel: [ext3_get_block_handle+453/704] ext3_get_block_handle+0x1c5/0x2c0 Jun 10 10:28:59 shawarma kernel: [__ide_dma_begin+34/64] __ide_dma_begin+0x22/0x40 Jun 10 10:28:59 shawarma kernel: [shrink_cache+314/768] shrink_cache+0x13a/0x300 Jun 10 10:28:59 shawarma kernel:...
2019 Jul 01
1
[PATCH] drm/nouveau: fix memory leak in nouveau_conn_reset()
...rker/0:2", pid 188, jiffies 4294695279 (age 53.179s) hex dump (first 32 bytes): 00 f0 ba 7b 54 8c ff ff 00 00 00 00 00 00 00 00 ...{T........... 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<000000005005c0d0>] kmem_cache_alloc_trace+0x195/0x2c0 [<00000000a122baed>] nouveau_conn_reset+0x25/0xc0 [nouveau] [<000000004fd189a2>] nouveau_connector_create+0x3a7/0x610 [nouveau] [<00000000c73343a8>] nv50_display_create+0x343/0x980 [nouveau] [<000000002e2b03c3>] nouveau_display_create+0x51f/0x660 [nouveau]...
2005 Jun 02
0
RE: Badness in softirq.c / no modules loaded / relatedtonetwork interface
...ostname is ftp, directory is /xen/mount/ftp) Output from umount: Jun 2 12:13:16 zen kernel: Badness in local_bh_enable at kernel/softirq.c:140 Jun 2 12:13:16 zen kernel: [local_bh_enable+130/144] local_bh_enable+0x82/0x90 Jun 2 12:13:16 zen kernel: [skb_checksum+317/704] skb_checksum+0x13d/0x2c0 Jun 2 12:13:16 zen kernel: [udp_poll+154/352] udp_poll+0x9a/0x160 Jun 2 12:13:16 zen kernel: [sock_poll+41/64] sock_poll+0x29/0x40 Jun 2 12:13:16 zen kernel: [do_pollfd+149/160] do_pollfd+0x95/0xa0 Jun 2 12:13:16 zen kernel: [do_poll+106/208] do_poll+0x6a/0xd0 Jun 2 12:13:16 zen kernel: [...
2013 Jun 04
2
vhost && kernel BUG at /build/linux/mm/slub.c:3352!
...ID: 29175 Comm: trinity-main Not tainted 3.10.0-rc4 #1 [179906.407692] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 [179906.411475] task: ffff8800b69e47c0 ti: ffff880092f2e000 task.ti: ffff880092f2e000 [179906.416305] RIP: 0010:[<ffffffff81225255>] [<ffffffff81225255>] kfree+0x155/0x2c0 [179906.421462] RSP: 0000:ffff880092f2fdb0 EFLAGS: 00010246 [179906.424983] RAX: 0100000000000000 RBX: ffff88009e588000 RCX: 0000000000000000 [179906.429746] RDX: ffff8800b69e47c0 RSI: 00000000000a0004 RDI: ffff88009e588000 [179906.434499] RBP: ffff880092f2fdd8 R08: 0000000000000001 R09: 000000000...
2012 Oct 24
1
Nouveau soft lockup after switcheroo'd...
...[nouveau] [<ffffffffa037bfec>] nouveau_timer_wait_eq+0x7c/0xe0 [nouveau] [<ffffffffa03f4f4e>] nvd0_sor_dpms+0xde/0x1a0 [nouveau] [<ffffffff813871d9>] ? fb_set_var+0xe9/0x3a0 [<ffffffff811554a9>] ? __pte_alloc+0xa9/0x160 [<ffffffffa03f4e70>] ? nvd0_sor_dp_link_set+0x2c0/0x2c0 [nouveau] [<ffffffffa00b2a5c>] drm_helper_connector_dpms+0xbc/0x100 [drm_kms_helper] [<ffffffffa00b1665>] drm_fb_helper_dpms.isra.13+0xa5/0xf0 [drm_kms_helper] [<ffffffffa00b16f9>] drm_fb_helper_blank+0x49/0x80 [drm_kms_helper] [<ffffffff81386e16>] fb_blank+0x56/0x...
2020 Oct 13
1
Nouveau DRM failure on 5120x1440 screen with 5.8/5.9 kernel
I'm having a problem with both the 5.8 and 5.9 kernels using the nouveau DRM driver. I have a laptop with a VGA card (specs below) connected to a 5120x1440 screen. At boot time, the card correctly detects the screen, tries to allocate fbdev fb0, then the video hangs completely for 15-30 seconds until it goes blank. This used to work in Linux 5.7 and earlier, although it allocated a 3840x1080
2013 Jun 20
3
[PATCH] virtio-pci: fix leaks of msix_affinity_masks
...ex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<ffffffff816e455e>] kmemleak_alloc+0x5e/0xc0 [<ffffffff811aa7f1>] kmem_cache_alloc_node_trace+0x141/0x2c0 [<ffffffff8133ba23>] alloc_cpumask_var_node+0x23/0x80 [<ffffffff8133ba8e>] alloc_cpumask_var+0xe/0x10 [<ffffffff813fdb3d>] vp_try_to_find_vqs+0x25d/0x810 [<ffffffff813fe171>] vp_find_vqs+0x81/0xb0 [<ffffffffa00d2a05>] init_vqs+0x85/0x120 [virtio_bal...
2004 Oct 01
6
rsync 2.6.3 hang (was rsync 2.6.2 crash)
...main] rsync 2472 socket_cleanup: returning 14:04:19 [main] rsync 2472 peek_socket: considering handle 0x308 14:04:19 [main] rsync 2472 peek_socket: adding write fd_set , fd 4 14:04:19 [main] rsync 2472 peek_socket: WINSOCK_SELECT returned 0 14:04:19 [main] rsync 2472 peek_socket: considering handle 0x2C0 14:04:19 [main] rsync 2472 peek_socket: adding read fd_set , fd 3 14:04:19 [main] rsync 2472 peek_socket: WINSOCK_SELECT returned 0 14:04:19 [main] rsync 2472 select_stuff::poll: returning 0 14:04:19 [main] rsync 2472 select_stuff::cleanup: calling cleanup routines 14:04:19 [main] rsync 2472 select...
2014 Dec 02
3
xen-c6 fails to boot
> -----Original Message----- > From: Johnny Hughes > On 12/01/2014 04:48 AM, Bob Ball wrote: > > > > [<ffffffff81575480>] panic+0xc4/0x1e1 > > [<ffffffff81054836>] find_new_reaper+0x176/0x180 > > [<ffffffff81055345>] forget_original_parent+0x45/0x2c0 > > [<ffffffff81107214>] ? task_function_call+0x44/0x50 > > [<ffffffff810555d7>] exit_notify+0x17/0x140 > > [<ffffffff81057053>] do_exit+0x1f3/0x450 > > [<ffffffff81057305>] do_group_exit+0x55/0xd0 > > [<ffffffff81057397>] sys_exit_gr...
2012 Jul 30
4
balance disables nodatacow
I have a 3-disk raid1 filesystem mounted with nodatacow. I have a folder in said filesystem with the 'C' NOCOW & 'Z' Not_Compressed flags set for good measure. I then copy in a large file and proceed to make random modifications. Filefrag shows no additional extents created, good so far. A big thank you to those devs who got that working. However, after
2014 Dec 01
2
xen-c6 fails to boot
...n and modifying the boot line there is a kernel panic during the boot process causing the host to enter a reboot loop. Console log attached. [<ffffffff81575480>] panic+0xc4/0x1e1 [<ffffffff81054836>] find_new_reaper+0x176/0x180 [<ffffffff81055345>] forget_original_parent+0x45/0x2c0 [<ffffffff81107214>] ? task_function_call+0x44/0x50 [<ffffffff810555d7>] exit_notify+0x17/0x140 [<ffffffff81057053>] do_exit+0x1f3/0x450 [<ffffffff81057305>] do_group_exit+0x55/0xd0 [<ffffffff81057397>] sys_exit_group+0x17/0x20 [<ffffffff815806a9>] system_c...
2013 Jun 19
2
[PATCH] virtio-pci: fix leaks of msix_affinity_masks
...ex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<ffffffff816e455e>] kmemleak_alloc+0x5e/0xc0 [<ffffffff811aa7f1>] kmem_cache_alloc_node_trace+0x141/0x2c0 [<ffffffff8133ba23>] alloc_cpumask_var_node+0x23/0x80 [<ffffffff8133ba8e>] alloc_cpumask_var+0xe/0x10 [<ffffffff813fdb3d>] vp_try_to_find_vqs+0x25d/0x810 [<ffffffff813fe171>] vp_find_vqs+0x81/0xb0 [<ffffffffa00d2a05>] init_vqs+0x85/0x120 [virtio_bal...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...837] xfsaild/dm-10 D ffff93203a2eeeb0 0 1061 2 0x00000000 [ 8280.188843] Call Trace: [ 8280.188857] [<ffffffff960a3a2e>] ? try_to_del_timer_sync+0x5e/0x90 [ 8280.188864] [<ffffffff96713f79>] schedule+0x29/0x70 [ 8280.188932] [<ffffffffc049fe36>] _xfs_log_force+0x1c6/0x2c0 [xfs] [ 8280.188939] [<ffffffff960cf1b0>] ? wake_up_state+0x20/0x20 [ 8280.188972] [<ffffffffc04abfec>] ? xfsaild+0x16c/0x6f0 [xfs] [ 8280.189003] [<ffffffffc049ff5c>] xfs_log_force+0x2c/0x70 [xfs] [ 8280.189035] [<ffffffffc04abe80>] ? xfs_trans_ail_cursor_first+0x90/0x9...
2020 Jan 09
1
[BUG] nouveau lockdep splat
...quire.part.101+0x29/0x30 [ 98.568312] kmem_cache_alloc_trace+0x3f/0x350 [ 98.573356] nvkm_vma_tail+0x70/0x150 [nouveau] [ 98.578488] nvkm_vmm_get_locked+0x42e/0x740 [nouveau] [ 98.584217] nvkm_uvmm_mthd+0x6de/0xbe0 [nouveau] [ 98.589521] nvkm_ioctl+0x18b/0x2c0 [nouveau] [ 98.594470] nvif_object_mthd+0x18b/0x1b0 [nouveau] [ 98.599938] nvif_vmm_get+0x124/0x170 [nouveau] [ 98.605083] nouveau_vma_new+0x356/0x3e0 [nouveau] [ 98.610473] nouveau_channel_prep+0x387/0x4a0 [nouveau] [ 98.616296] nouveau_channel_new+0xf7...