Displaying 20 results from an estimated 359 matches for "0x150".
2014 Jun 27
2
virt_blk BUG: sleeping function called from invalid context
...tack+0x4d/0x66
[<ffffffff810d4f14>] __might_sleep+0x184/0x240
[<ffffffff8180caf2>] mutex_lock_nested+0x42/0x440
[<ffffffff810e1de6>] ? local_clock+0x16/0x30
[<ffffffff810fc23f>] ? lock_release_holdtime.part.28+0xf/0x200
[<ffffffff812d76a0>] kernfs_notify+0x90/0x150
[<ffffffff8163377c>] bitmap_endwrite+0xcc/0x240
[<ffffffffa00de863>] close_write+0x93/0xb0 [raid1]
[<ffffffffa00df029>] r1_bio_write_done+0x29/0x50 [raid1]
[<ffffffffa00e0474>] raid1_end_write_request+0xe4/0x260 [raid1]
[<ffffffff813acb8b>] bio_endio+0x6b/0xa...
2013 Jul 31
3
[PATCH] virtio-scsi: Fix virtqueue affinity setup
...+0x26/0x50
[<ffffffff8179c7d2>] virtscsi_remove+0x82/0xa0
[<ffffffff814cb6b2>] virtio_dev_remove+0x22/0x70
[<ffffffff8167ca49>] __device_release_driver+0x69/0xd0
[<ffffffff8167cb9d>] device_release_driver+0x2d/0x40
[<ffffffff8167bb96>] bus_remove_device+0x116/0x150
[<ffffffff81679936>] device_del+0x126/0x1e0
[<ffffffff81679a06>] device_unregister+0x16/0x30
[<ffffffff814cb889>] unregister_virtio_device+0x19/0x30
[<ffffffff814cdad6>] virtio_pci_remove+0x36/0x80
[<ffffffff81464ae7>] pci_device_remove+0x37/0x70
[<fffff...
2014 Jun 29
0
virt_blk BUG: sleeping function called from invalid context
...[<ffffffff810d4f14>] __might_sleep+0x184/0x240
> [<ffffffff8180caf2>] mutex_lock_nested+0x42/0x440
> [<ffffffff810e1de6>] ? local_clock+0x16/0x30
> [<ffffffff810fc23f>] ? lock_release_holdtime.part.28+0xf/0x200
> [<ffffffff812d76a0>] kernfs_notify+0x90/0x150
> [<ffffffff8163377c>] bitmap_endwrite+0xcc/0x240
> [<ffffffffa00de863>] close_write+0x93/0xb0 [raid1]
> [<ffffffffa00df029>] r1_bio_write_done+0x29/0x50 [raid1]
> [<ffffffffa00e0474>] raid1_end_write_request+0xe4/0x260 [raid1]
> [<ffffffff813acb8...
2023 Dec 09
2
BUG: KFENCE: memory corruption in free_async+0x1d8/0x1e0
...0
[21963.079728] usbdev_ioctl+0x138/0x1c40
[21963.079744] __arm64_sys_ioctl+0xd0/0x130
[21963.079769] invoke_syscall+0x7c/0x130
[21963.079793] el0_svc_common.constprop.0+0x6c/0x160
[21963.079815] do_el0_svc+0x38/0x120
[21963.079835] el0_svc+0x34/0xc0
[21963.079856] el0t_64_sync_handler+0x11c/0x150
[21963.079876] el0t_64_sync+0x198/0x19c
[21963.079892]
[21963.079899] kfence-#183: 0x0000000070088b17-0x00000000bed184b6, size=5,
cache=kmalloc-128
[21963.079899]
[21963.079916] allocated by task 1647 on cpu 2 at 21963.076359s:
[21963.079946] proc_do_submiturb+0xdb0/0x1000
[21963.079962] usbdev_...
2011 Jun 26
2
[LLVMdev] Can LLVM jitter emit the native code in continuous memory addresses ?
...h contain:
Global variables
----------------
Function Foo()
----------------
Function Too()
when I request the JIT code I want the JIT to be at contiguous memory
addresses (with 4-byte alignment):
0x100: Global Vars (16 bytes)
0x110: Foo() code (32 bytes)
0x130: Too() code (32 bytes)
0x150: end.
So I can save the JIT code (from 0x100 -> 0x150) and load it in
the executing process at any virtual address,
assume (0x300 -> 0x350), and execute it by jumping to the entry point.
2018 Mar 01
0
UBSAN warning in nouveau_bios.c:1528:8
...dump_stack+0x5a/0x99
[ 8.015500] ubsan_epilogue+0x9/0x40
[ 8.015503] __ubsan_handle_shift_out_of_bounds+0x124/0x160
[ 8.015506] ? _dev_info+0x67/0x90
[ 8.015509] ? dev_printk_emit+0x49/0x70
[ 8.015632] parse_dcb_entry+0x91e/0xd90 [nouveau]
[ 8.015712] ? parse_bit_M_tbl_entry+0x150/0x150 [nouveau]
[ 8.015791] olddcb_outp_foreach+0x66/0xa0 [nouveau]
[ 8.015870] nouveau_bios_init+0x23a/0x2250 [nouveau]
[ 8.015950] ? nouveau_ttm_init+0x3a4/0x710 [nouveau]
[ 8.016029] nouveau_drm_load+0x229/0xf10 [nouveau]
[ 8.016033] ? sysfs_do_create_link_sd+0xa6/0x170
[...
2017 Mar 25
1
NVAC - BUG: unable to handle kernel NULL pointer dereference
...Modules linked in: ... nouveau ...
CPU: 0 PID: 6895 Comm: Xorg Not tainted 4.10.5-1001.fc24.x86_64 #1
...
Call Trace:
drm_atomic_helper_wait_for_fences+0x48/0x120 [drm_kms_helper]
nv50_disp_atomic_commit+0x19c/0x2a0 [nouveau]
drm_atomic_commit+0x4b/0x50 [drm]
drm_atomic_helper_update_plane+0xec/0x150 [drm_kms_helper]
__setplane_internal+0x1b4/0x280 [drm]
drm_mode_cursor_universal+0x126/0x210 [drm]
drm_mode_cursor_common+0x86/0x180 [drm]
drm_mode_cursor_ioctl+0x50/0x70 [drm]
drm_ioctl+0x21b/0x4c0 [drm]
? drm_mode_setplane+0x1a0/0x1a0 [drm]
nouveau_drm_ioctl+0x74/0xc0 [nouveau]
do_vfs_ioc...
2011 Jun 27
0
[LLVMdev] Can LLVM jitter emit the native code in continuous memory addresses ?
...nction Foo()
> ----------------
> Function Too()
>
> when I request the JIT code I want the JIT to be at contiguous memory
> addresses (with 4-byte alignment):
>
> 0x100: Global Vars (16 bytes)
> 0x110: Foo() code (32 bytes)
> 0x130: Too() code (32 bytes)
> 0x150: end.
>
> So I can save the JIT code (from 0x100 -> 0x150) and load it in the executing process at any virtual address,
> assume (0x300 -> 0x350), and execute it by jumping to the entry point.
>
>
2020 Oct 23
0
kvm+nouveau induced lockdep gripe
...0x10b/0x240 [nouveau]
[ 70.135506] nvkm_udevice_init+0x49/0x70 [nouveau]
[ 70.135531] nvkm_object_init+0x3d/0x180 [nouveau]
[ 70.135555] nvkm_ioctl_new+0x1a1/0x260 [nouveau]
[ 70.135578] nvkm_ioctl+0x10a/0x240 [nouveau]
[ 70.135600] nvif_object_ctor+0xeb/0x150 [nouveau]
[ 70.135622] nvif_device_ctor+0x1f/0x60 [nouveau]
[ 70.135668] nouveau_cli_init+0x1ac/0x590 [nouveau]
[ 70.135711] nouveau_drm_device_init+0x68/0x800 [nouveau]
[ 70.135753] nouveau_drm_probe+0xfb/0x200 [nouveau]
[ 70.135761] local_pci_probe+0x4...
2023 May 23
1
[PATCH v2] ocfs2: fix use-after-free when unmounting read-only filesystem
...gacy_get_tree+0x6c/0xb0
vfs_get_tree+0x3e/0x110
path_mount+0xa90/0xe10
__x64_sys_mount+0x16f/0x1a0
do_syscall_64+0x43/0x90
entry_SYSCALL_64_after_hwframe+0x72/0xdc
Freed by task 650:
kasan_save_stack+0x1c/0x40
kasan_set_track+0x21/0x30
kasan_save_free_info+0x2a/0x50
__kasan_slab_free+0xf9/0x150
__kmem_cache_free+0x89/0x180
ocfs2_local_free_info+0x2ba/0x3f0 [ocfs2]
dquot_disable+0x35f/0xa70
ocfs2_susp_quotas.isra.0+0x159/0x1a0 [ocfs2]
ocfs2_remount+0x150/0x580 [ocfs2]
reconfigure_super+0x1a5/0x3a0
path_mount+0xc8a/0xe10
__x64_sys_mount+0x16f/0x1a0
do_syscall_64+0x43/0x90
entry_SY...
2012 Feb 06
1
Unknown KERNEL Warning in boot messages
...tion+0x0/0x20
[<ffffffffa00747d1>] ? sync_request+0x541/0xa70 [raid10]
[<ffffffffa0073599>] ? raid10_unplug+0x29/0x30 [raid10]
[<ffffffff813ea428>] ? is_mddev_idle+0xc8/0x120
[<ffffffff813eab6d>] ? md_do_sync+0x6ad/0xbe0
[<ffffffff813eb336>] ? md_thread+0x116/0x150
[<ffffffff813eb220>] ? md_thread+0x0/0x150
[<ffffffff810906a6>] ? kthread+0x96/0xa0
[<ffffffff8100c14a>] ? child_rip+0xa/0x20
[<ffffffff81090610>] ? kthread+0x0/0xa0
[<ffffffff8100c140>] ? child_rip+0x0/0x20
---[ end trace a7919e7f17c0a727 ]---
md: md1: data-...
2013 Feb 13
0
Re: Heavy memory leak when using quota groups
...
> [ 5123.800178] btrfs-endio-wri: page allocation failure: order:0, mode:0x20
> [ 5123.800188] Pid: 27508, comm: btrfs-endio-wri Tainted: GF
> O 3.8.0-030800rc5-generic #201301251535
> [ 5123.800190] Call Trace:
> [ 5123.800204] [<ffffffff8113a656>] warn_alloc_failed+0xf6/0x150
> [ 5123.800208] [<ffffffff8113e28e>] __alloc_pages_nodemask+0x76e/0x9b0
> [ 5123.800213] [<ffffffff81182945>] ? new_slab+0x125/0x1a0
> [ 5123.800216] [<ffffffff81185c2c>] ? kmem_cache_alloc+0x11c/0x140
> [ 5123.800221] [<ffffffff8117a66a>] alloc_pages_curren...
2016 Jan 15
0
freshclam: page allocation failure: order:0, mode:0x2204010
...de_alloc+0x28/0xa0
[<ffffffff8142d958>] radix_tree_node_alloc+0x28/0xa0
[<ffffffff8142db1c>] __radix_tree_create+0x7c/0x200
[<ffffffff8142dce1>] radix_tree_insert+0x41/0xe0
[<ffffffff81459432>] add_dma_entry+0xa2/0x170
[<ffffffff81459843>] debug_dma_map_page+0x113/0x150
[<ffffffff816240a8>] usb_hcd_map_urb_for_dma+0x5f8/0x780
[<ffffffff81108d8d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffff8162469d>] usb_hcd_submit_urb+0x1cd/0xac0
[<ffffffff810e82ca>] ? sched_clock_cpu+0x8a/0xb0
[<ffffffff816e6387>] ? led_trigger_blink_oneshot+0x77/0x...
2018 Aug 06
1
[PATCH v4 7/8] drm/nouveau: Fix deadlocks in nouveau_connector_detect()
.../0x850
> [ 861.490392] ? finish_wait+0x90/0x90
> [ 861.491068] __pm_runtime_resume+0x4e/0x90
> [ 861.491753] nouveau_display_hpd_work+0x22/0x60 [nouveau]
> [ 861.492416] process_one_work+0x231/0x620
> [ 861.493068] worker_thread+0x44/0x3a0
> [ 861.493722] kthread+0x12b/0x150
> [ 861.494342] ? wq_pool_ids_show+0x140/0x140
> [ 861.494991] ? kthread_create_worker_on_cpu+0x70/0x70
> [ 861.495648] ret_from_fork+0x3a/0x50
> [ 861.496304] INFO: task kworker/6:2:320 blocked for more than 120 seconds.
> [ 861.496968] Tainted: G O 4.18...
2013 Mar 05
3
nouveau lockdep splat
...[ 0.633664] [<ffffffff815eaf52>] ? mutex_lock_nested+0x292/0x330
> [ 0.633665] [<ffffffff815ead2e>] mutex_lock_nested+0x6e/0x330
> [ 0.633667] [<ffffffff8141bb53>] ? evo_wait+0x43/0xf0
> [ 0.633668] [<ffffffff815eb0b7>] ? __mutex_unlock_slowpath+0xc7/0x150
> [ 0.633669] [<ffffffff8141bb53>] evo_wait+0x43/0xf0
> [ 0.633671] [<ffffffff8141e569>] nv50_display_flip_next+0x749/0x7d0
> [ 0.633672] [<ffffffff8141bc37>] ? evo_kick+0x37/0x40
> [ 0.633674] [<ffffffff8141e7ee>] nv50_crtc_commit+0x10e/0x230
>...
2011 Jul 01
1
[79030.229547] motion: page allocation failure: order:6, mode:0xd4
...ents+0x12/0x20
[79030.229635] [<ffffffff810e2547>] __get_free_pages+0x17/0x80
[79030.229645] [<ffffffff813f0af6>] xen_swiotlb_alloc_coherent+0x56/0x140
[79030.229656] [<ffffffff814ea68e>] ? usb_alloc_urb+0x1e/0x50
[79030.229666] [<ffffffff814f00f5>] hcd_buffer_alloc+0x95/0x150
[79030.229676] [<ffffffff814e1806>] usb_alloc_coherent+0x26/0x30
[79030.229686] [<ffffffff8158d0a1>] em28xx_init_isoc+0x131/0x3a0
[79030.229696] [<ffffffff81586f1e>] buffer_prepare+0xbe/0x150
[79030.229706] [<ffffffff815a2ff7>] videobuf_qbuf+0x237/0x5b0
[79030.229716] [...
2018 Aug 01
0
[PATCH v4 7/8] drm/nouveau: Fix deadlocks in nouveau_connector_detect()
...861.489744] rpm_resume+0x19c/0x850
[ 861.490392] ? finish_wait+0x90/0x90
[ 861.491068] __pm_runtime_resume+0x4e/0x90
[ 861.491753] nouveau_display_hpd_work+0x22/0x60 [nouveau]
[ 861.492416] process_one_work+0x231/0x620
[ 861.493068] worker_thread+0x44/0x3a0
[ 861.493722] kthread+0x12b/0x150
[ 861.494342] ? wq_pool_ids_show+0x140/0x140
[ 861.494991] ? kthread_create_worker_on_cpu+0x70/0x70
[ 861.495648] ret_from_fork+0x3a/0x50
[ 861.496304] INFO: task kworker/6:2:320 blocked for more than 120 seconds.
[ 861.496968] Tainted: G O 4.18.0-rc6Lyude-Test+ #1
[ 8...
2014 Jul 01
3
[PATCH driver-core-linus] kernfs: kernfs_notify() must be useable from non-sleepable contexts
...00000441800 ffff880078fa1780 ffff88007d403c38 ffffffff8180caf2
Call Trace:
<IRQ> [<ffffffff81807b4c>] dump_stack+0x4d/0x66
[<ffffffff810d4f14>] __might_sleep+0x184/0x240
[<ffffffff8180caf2>] mutex_lock_nested+0x42/0x440
[<ffffffff812d76a0>] kernfs_notify+0x90/0x150
[<ffffffff8163377c>] bitmap_endwrite+0xcc/0x240
[<ffffffffa00de863>] close_write+0x93/0xb0 [raid1]
[<ffffffffa00df029>] r1_bio_write_done+0x29/0x50 [raid1]
[<ffffffffa00e0474>] raid1_end_write_request+0xe4/0x260 [raid1]
[<ffffffff813acb8b>] bio_endio+0x6b/0xa...