Displaying 20 results from an estimated 256 matches for "0x210".
2017 Mar 02
2
[Bug 100035] New: nouveau runtime pm causes soft lockups and hangs during boot
...nvkm_client_resume+0xe/0x10 [nouveau]
[ 56.593350] nvif_client_resume+0x14/0x20 [nouveau]
[ 56.593365] nouveau_do_resume+0x4d/0x130 [nouveau]
[ 56.593379] nouveau_pmops_runtime_resume+0x72/0x150 [nouveau]
[ 56.593381] pci_pm_runtime_resume+0x7b/0xa0
[ 56.593382] __rpm_callback+0xc6/0x210
[ 56.593383] ? pci_restore_standard_config+0x40/0x40
[ 56.593384] rpm_callback+0x24/0x80
[ 56.593385] ? pci_restore_standard_config+0x40/0x40
[ 56.593385] rpm_resume+0x47d/0x680
[ 56.593400] ? i915_gem_timeline_init+0xe/0x10 [i915]
[ 56.593401] __pm_runtime_resume+0x4f/0x80
[ 56...
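For orientation when reading traces like the one above: pci_pm_runtime_resume() reaches the driver through the dev_pm_ops callbacks it registered, invoked via rpm_resume() -> rpm_callback() -> __rpm_callback(). A minimal sketch of that wiring for a PCI driver, using generic placeholder names rather than nouveau's actual code:

    #include <linux/pci.h>
    #include <linux/pm.h>
    #include <linux/pm_runtime.h>

    /* Hypothetical placeholder driver; illustrates the callback wiring only. */

    /* Called by the PM core when the idle device must be powered up
     * again, e.g. because a userspace ioctl needs the GPU. */
    static int demo_runtime_resume(struct device *dev)
    {
            /* restore clocks and device state here */
            return 0;
    }

    static int demo_runtime_suspend(struct device *dev)
    {
            /* quiesce the device and cut power here */
            return 0;
    }

    static const struct dev_pm_ops demo_pm_ops = {
            SET_RUNTIME_PM_OPS(demo_runtime_suspend, demo_runtime_resume, NULL)
    };

    static struct pci_driver demo_pci_driver = {
            .name      = "demo",
            .driver.pm = &demo_pm_ops,
            /* .id_table, .probe, .remove elided */
    };

A pm_runtime_get_sync() against such a device is what ultimately drives the resume path shown in the trace.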
2013 Mar 05
3
nouveau lockdep splat
...[ 0.633711] [<ffffffff813f63ba>] nouveau_drm_probe+0x26a/0x2c0
> [ 0.633713] [<ffffffff812b4f15>] ? pci_match_device+0xd5/0xe0
> [ 0.633714] [<ffffffff812b5096>] pci_device_probe+0x136/0x150
> [ 0.633715] [<ffffffff81433566>] driver_probe_device+0x76/0x210
> [ 0.633716] [<ffffffff814337ab>] __driver_attach+0xab/0xb0
> [ 0.633717] [<ffffffff81433700>] ? driver_probe_device+0x210/0x210
> [ 0.633718] [<ffffffff8143175d>] bus_for_each_dev+0x5d/0xa0
> [ 0.633719] [<ffffffff81432fae>] driver_attach+0x1e/0...
2011 Jan 20
29
Runes of Magic ClientUpdate.exe crash???
Every time I download and try to install through crossover games, I get this same error. I made sure to use winetricks to download ie6, vcrun2005, and wininet. The error is "the program clientupdate.exe has encountered a serious problem and needs to close. We are sorry for the inconvenience". Does anyone have experience getting this to run through just wine, or crossover games? is
2020 Oct 23
0
kvm+nouveau induced lockdep gripe
...+0x33/0x40
[ 70.135842] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 70.135847]
-> #2 (&device->mutex){+.+.}-{3:3}:
[ 70.135857] __mutex_lock+0x90/0x9c0
[ 70.135902] nvkm_udevice_fini+0x23/0x70 [nouveau]
[ 70.135927] nvkm_object_fini+0xb8/0x210 [nouveau]
[ 70.135951] nvkm_object_fini+0x73/0x210 [nouveau]
[ 70.135974] nvkm_ioctl_del+0x7e/0xa0 [nouveau]
[ 70.135997] nvkm_ioctl+0x10a/0x240 [nouveau]
[ 70.136019] nvif_object_dtor+0x4a/0x60 [nouveau]
[ 70.136040] nvif_client_dtor+0xe/0x40 [nouveau]...
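Each "-> #N (lockname)" stanza in a splat like this is one edge of the circular lock dependency lockdep found, with the backtrace showing where that lock was taken while the previous one in the chain was held. A contrived two-mutex inversion that produces this style of report, using hypothetical locks unrelated to nouveau's:

    #include <linux/mutex.h>

    /* Hypothetical locks, for illustration only. */
    static DEFINE_MUTEX(lock_a);
    static DEFINE_MUTEX(lock_b);

    static void path_one(void)
    {
            mutex_lock(&lock_a);
            mutex_lock(&lock_b);    /* lockdep records the edge A -> B */
            mutex_unlock(&lock_b);
            mutex_unlock(&lock_a);
    }

    static void path_two(void)
    {
            mutex_lock(&lock_b);
            mutex_lock(&lock_a);    /* B -> A closes the cycle; lockdep
                                     * prints one "-> #N" stanza per edge */
            mutex_unlock(&lock_a);
            mutex_unlock(&lock_b);
    }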
2020 Sep 09
0
nouveau: BUG: Invalid wait context
...(reservation_ww_class_acquire){+.+.}-{0:0}, at: drm_ioctl_kernel+0x91/0xe0 [drm]
[ 1143.133785] #2: ffff8d3e3dcef1a0 (reservation_ww_class_mutex){+.+.}-{4:4}, at: nouveau_gem_ioctl_pushbuf+0x63b/0x1cb0 [nouveau]
[ 1143.133834] #3: ffff8d3e9ec9ea10 (krc.lock){-.-.}-{2:2}, at: kvfree_call_rcu+0x65/0x210
[ 1143.133845] stack backtrace:
[ 1143.133850] CPU: 2 PID: 2015 Comm: X Kdump: loaded Tainted: G S E 5.9.0.g34d4ddd-preempt #2
[ 1143.133856] Hardware name: MEDION MS-7848/MS-7848, BIOS M7848W08.20C 09/23/2013
[ 1143.133862] Call Trace:
[ 1143.133872] dump_stack+0x77/0x9b
[ 1143.13387...
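"Invalid wait context" comes from lockdep's wait-type checking (CONFIG_PROVE_RAW_LOCK_NESTING); the trailing {n:n} pairs above are each lock's inner:outer wait types. The report fires when a lock that may sleep (or becomes sleepable on PREEMPT_RT) is acquired under one with a stricter type, e.g. a spinlock_t taken while holding a raw spinlock such as krc.lock. A contrived illustration with hypothetical locks:

    #include <linux/spinlock.h>

    /* Hypothetical locks, for illustration only. */
    static DEFINE_RAW_SPINLOCK(raw_lock);  /* always a true spinlock */
    static DEFINE_SPINLOCK(cfg_lock);      /* becomes sleepable on PREEMPT_RT */

    static void bad_nesting(void)
    {
            raw_spin_lock(&raw_lock);
            spin_lock(&cfg_lock);          /* "BUG: Invalid wait context" */
            spin_unlock(&cfg_lock);
            raw_spin_unlock(&raw_lock);
    }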
2003 Apr 15
1
winbindd wbinfo -u - can't populate
...d (was 0xc0000022 before using administrator account)
rpcclient SERVER -U % -c querydispinfo
cmd = querydispinfo
index: 0x1 RID: 0x7d0 acb: 0x10 Account: andyj Name: Andrew Judge Desc:
index: 0x2 RID: 0x7d2 acb: 0x10 Account: soledad Name: Soledad Alvarez Desc:
index: 0x3 RID: 0x3e8 acb: 0x210 Account: root Name: root Desc:
index: 0x4 RID: 0x7e4 acb: 0x210 Account: rrico Name: Desc:
index: 0x5 RID: 0x7e8 acb: 0x210 Account: leman Name: Leman Porter Desc:
Windows 2000 Server mixed
RH 9 standard RPMs
Samba server joined to the domain
winbind separator = +
winbind uid = 10000-...
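The acb column in that querydispinfo output is Samba's account-control bitmask, and 0x210 is two flags OR'd together where the plain users show 0x10. A quick decode, assuming Samba's standard ACB_* bit values:

    #include <stdio.h>

    /* Bit values as commonly defined in Samba's headers (assumption). */
    #define ACB_NORMAL  0x0010  /* normal user account */
    #define ACB_PWNOEXP 0x0200  /* password does not expire */

    int main(void)
    {
            unsigned int acb = 0x210;

            printf("normal user:      %s\n", (acb & ACB_NORMAL)  ? "yes" : "no");
            printf("pw never expires: %s\n", (acb & ACB_PWNOEXP) ? "yes" : "no");
            return 0;
    }

So the 0x210 accounts are normal users whose passwords are flagged never to expire.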
2018 Jul 13
3
[PATCH 0/2] drm/nouveau: Fix connector memory corruption issues
This fixes some nasty issues I found in nouveau that were being caused by
looping through connectors using racy legacy methods, along with some
caused by making incorrect assumptions about the drm_connector structs
in nouveau's connector list. Most of these memory corruption issues
could be reproduced by using an MST hub with nouveau.
Cc: Karol Herbst <karolherbst at gmail.com>
Cc: stable
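For reference, the race-free connector iteration the DRM core provides, which is the general pattern fixes like this move toward (a sketch of the API, not the patch itself):

    #include <drm/drm_connector.h>
    #include <drm/drm_device.h>

    static void walk_connectors(struct drm_device *dev)
    {
            struct drm_connector *connector;
            struct drm_connector_list_iter iter;

            /* The iterator holds a reference on each connector while it
             * is visited, so hotplug (e.g. an MST unplug) cannot free a
             * connector out from under us, unlike walking
             * mode_config.connector_list directly. */
            drm_connector_list_iter_begin(dev, &iter);
            drm_for_each_connector_iter(connector, &iter) {
                    /* inspect connector */
            }
            drm_connector_list_iter_end(&iter);
    }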
2018 Jul 13
0
[PATCH 2/2] drm/nouveau: Avoid looping through fake MST connectors
...g+0x70/0x70
[ 201.039275] __rpm_callback+0x1f2/0x5d0
[ 201.039279] ? rpm_resume+0x560/0x18a0
[ 201.039283] ? pci_restore_standard_config+0x70/0x70
[ 201.039287] ? pci_restore_standard_config+0x70/0x70
[ 201.039291] ? pci_restore_standard_config+0x70/0x70
[ 201.039296] rpm_callback+0x175/0x210
[ 201.039300] ? pci_restore_standard_config+0x70/0x70
[ 201.039305] rpm_resume+0xcc3/0x18a0
[ 201.039312] ? rpm_callback+0x210/0x210
[ 201.039317] ? __pm_runtime_resume+0x9e/0x100
[ 201.039322] ? kasan_check_write+0x14/0x20
[ 201.039326] ? do_raw_spin_lock+0xc2/0x1c0
[ 201.039333] __p...
2020 Oct 24
1
kvm+nouveau induced lockdep gripe
...entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [ 70.135847]
> -> #2 (&device->mutex){+.+.}-{3:3}:
> [ 70.135857] __mutex_lock+0x90/0x9c0
> [ 70.135902] nvkm_udevice_fini+0x23/0x70 [nouveau]
> [ 70.135927] nvkm_object_fini+0xb8/0x210 [nouveau]
> [ 70.135951] nvkm_object_fini+0x73/0x210 [nouveau]
> [ 70.135974] nvkm_ioctl_del+0x7e/0xa0 [nouveau]
> [ 70.135997] nvkm_ioctl+0x10a/0x240 [nouveau]
> [ 70.136019] nvif_object_dtor+0x4a/0x60 [nouveau]
> [ 70.136040] nvif_client...
2015 Oct 01
2
req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...omm: dmcrypt_write Not tainted 4.1.8-gentoo #1
>>> Hardware name: Red Hat KVM, BIOS seabios-1.7.5-8.el7 04/01/2014
>>> task: ffff88061fb70000 ti: ffff88061ff30000 task.ti: ffff88061ff30000
>>> RIP: 0010:[<ffffffffb4557b30>] [<ffffffffb4557b30>] virtio_queue_rq+0x210/0x2b0
>>> RSP: 0018:ffff88061ff33ba8 EFLAGS: 00010202
>>> RAX: 00000000000000b1 RBX: ffff88061fb2fc00 RCX: ffff88061ff33c30
>>> RDX: 0000000000000008 RSI: ffff88061ff33c50 RDI: ffff88061fb2fc00
>>> RBP: ffff88061ff33bf8 R08: ffff88061eef3540 R09: ffff88061ff33c30...
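The invariant named in the subject: a request's physical-segment count must fit within the scatter-gather table the driver sized from the queue's segment limit, and virtio_blk additionally reserves two descriptors per request for the header and status byte. A sketch of the style of check being tripped; the exact expression at virtio_blk.c:172 varies by kernel version:

    #include <linux/blkdev.h>
    #include <linux/bug.h>

    /* sg_elems is the sg table size derived from queue_max_segments();
     * the exact check in virtio_blk differs across kernel versions. */
    static void check_segments(struct request *req, unsigned int sg_elems)
    {
            /* data segments + 2 (request header and status byte) must
             * fit in the sg table, or the driver hits BUG_ON() */
            BUG_ON(req->nr_phys_segments + 2 > sg_elems);
    }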
2010 Aug 04
1
A reproducible crush of mounting a subvolume
...ta_acpi ata_generic radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core [last unloaded: scsi_wait_scan]
Pid: 2465, comm: mount Not tainted 2.6.35-0.47.rc5.git2.fc14.x86_64 #1 0V4W66/OptiPlex 780
RIP: 0010:[<ffffffff8113a58c>] [<ffffffff8113a58c>] shrink_dcache_for_umount_subtree+0x133/0x210
RSP: 0018:ffff8800a9d3dc88 EFLAGS: 00010296
RAX: 0000000000000057 RBX: ffff8800aac528f8 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff81eee894 RDI: 0000000000000246
RBP: ffff8800a9d3dcb8 R08: 000000000000ba70 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffff880...
2013 Feb 27
2
ocfs2 bug reports, any advices? thanks
...9:50:59 Server21 kernel: [ 1199.751422] [<ffffffffa0635a5b>] ocfs2_fill_super+0x154b/0x2540 [ocfs2]
Feb 27 09:50:59 Server21 kernel: [ 1199.751426] [<ffffffff81316059>] ? vsnprintf+0x219/0x600
Feb 27 09:50:59 Server21 kernel: [ 1199.751433] [<ffffffff8117aa46>] mount_bdev+0x1c6/0x210
Feb 27 09:50:59 Server21 kernel: [ 1199.751460] [<ffffffffa0634510>] ? ocfs2_initialize_super.isra.208+0x1440/0x1440 [ocfs2]
Feb 27 09:50:59 Server21 kernel: [ 1199.751487] [<ffffffffa0624615>] ocfs2_mount+0x15/0x20 [ocfs2]
Feb 27 09:50:59 Server21 kernel: [ 1199.751491] [<fffffff...
2005 Dec 05
11
Xen 3.0 and Hyperthreading an issue?
Just gave 3.0 a spin. Had been running 2.0.7 for the past 3 months or so without problems (aside from intermittent failure during live migration). Anyway, 3.0 seems to have an issue with my machine. It starts up the 4 domains that I've got defined (was running 6 user domains with 2.0.7, but two of those were running 2.4 kernels which I can't seem to build with Xen 3.0 yet, and
2012 Jun 21
1
echo 0 > /proc/sys/kernel/hung_task_timeout_secs and others error, Part II
...<ffffffffa053fa9c>] ? ocfs2_inode_lock_full_nested+0x52c/0xa90 [ocfs2]
Jun 20 20:42:01 H3CRDS11-RD kernel: [17509.034939] [<ffffffff81647ae2>] ? balance_dirty_pages.isra.17+0x457/0x4ba
Jun 20 20:42:01 H3CRDS11-RD kernel: [17509.034959] [<ffffffffa052bf26>] ocfs2_write_begin+0xf6/0x210 [ocfs2]
Jun 20 20:42:01 H3CRDS11-RD kernel: [17509.034968] [<ffffffff8111752a>] generic_perform_write+0xca/0x210
Jun 20 20:42:01 H3CRDS11-RD kernel: [17509.034991] [<ffffffffa053d9b9>] ? ocfs2_inode_unlock+0xb9/0x130 [ocfs2]
Jun 20 20:42:01 H3CRDS11-RD kernel: [17509.034998] [<fff...
2018 Jul 13
2
[PATCH v2 0/2] drm/nouveau: Fix connector memory corruption issues
This fixes some nasty issues I found in nouveau that were being caused by
looping through connectors using racy legacy methods, along with some
caused by making incorrect assumptions about the drm_connector structs
in nouveau's connector list. Most of these memory corruption issues
could be reproduced by using an MST hub with nouveau.
Next version of