search for: 0x660

Displaying 20 results from an estimated 45 matches for "0x660".

2004 May 04
2
IE6 problems maybe due to dcom98 installing issues
...> "C:\\Windows\\System\\comcat.dll" fixme:setupapi:do_file_copyW Notify that target version is greater.. err:setupapi:SetupDefaultQueueCallbackA copy error 0 "X:\\IXP004.TMP\\oleaut32.dll" -> "C:\\Windows\\System\\oleaut32.dll" fixme:setupapi:vcpUICallbackProc16 (0x660, 0705, 0000, 00000000, 40361dec) - semi-stub fixme:setupapi:vcpUICallbackProc16 (0x660, 070f, 0000, 00000000, 40361dec) - semi-stub fixme:setupapi:vcpUICallbackProc16 (0x660, 0710, 0000, 00000000, 40361dec) - semi-stub fixme:setupapi:vcpUICallbackProc16 (0x660, 070b, 0000, 00000000, 40361dec) - sem...
2019 Jul 01
1
[PATCH] drm/nouveau: fix memory leak in nouveau_conn_reset()
..._trace+0x195/0x2c0 [<00000000a122baed>] nouveau_conn_reset+0x25/0xc0 [nouveau] [<000000004fd189a2>] nouveau_connector_create+0x3a7/0x610 [nouveau] [<00000000c73343a8>] nv50_display_create+0x343/0x980 [nouveau] [<000000002e2b03c3>] nouveau_display_create+0x51f/0x660 [nouveau] [<00000000c924699b>] nouveau_drm_device_init+0x182/0x7f0 [nouveau] [<00000000cc029436>] nouveau_drm_probe+0x20c/0x2c0 [nouveau] [<000000007e961c3e>] local_pci_probe+0x47/0xa0 [<00000000da14d569>] work_for_cpu_fn+0x1a/0x30 [<0000000028da4805&g...
2019 Jul 26
0
[PATCH AUTOSEL 5.2 83/85] drm/nouveau: fix memory leak in nouveau_conn_reset()
..._trace+0x195/0x2c0 [<00000000a122baed>] nouveau_conn_reset+0x25/0xc0 [nouveau] [<000000004fd189a2>] nouveau_connector_create+0x3a7/0x610 [nouveau] [<00000000c73343a8>] nv50_display_create+0x343/0x980 [nouveau] [<000000002e2b03c3>] nouveau_display_create+0x51f/0x660 [nouveau] [<00000000c924699b>] nouveau_drm_device_init+0x182/0x7f0 [nouveau] [<00000000cc029436>] nouveau_drm_probe+0x20c/0x2c0 [nouveau] [<000000007e961c3e>] local_pci_probe+0x47/0xa0 [<00000000da14d569>] work_for_cpu_fn+0x1a/0x30 [<0000000028da4805&g...
2019 Jul 26
0
[PATCH AUTOSEL 4.19 47/47] drm/nouveau: fix memory leak in nouveau_conn_reset()
..._trace+0x195/0x2c0 [<00000000a122baed>] nouveau_conn_reset+0x25/0xc0 [nouveau] [<000000004fd189a2>] nouveau_connector_create+0x3a7/0x610 [nouveau] [<00000000c73343a8>] nv50_display_create+0x343/0x980 [nouveau] [<000000002e2b03c3>] nouveau_display_create+0x51f/0x660 [nouveau] [<00000000c924699b>] nouveau_drm_device_init+0x182/0x7f0 [nouveau] [<00000000cc029436>] nouveau_drm_probe+0x20c/0x2c0 [nouveau] [<000000007e961c3e>] local_pci_probe+0x47/0xa0 [<00000000da14d569>] work_for_cpu_fn+0x1a/0x30 [<0000000028da4805&g...
2019 Jul 26
0
[PATCH AUTOSEL 4.14 37/37] drm/nouveau: fix memory leak in nouveau_conn_reset()
..._trace+0x195/0x2c0 [<00000000a122baed>] nouveau_conn_reset+0x25/0xc0 [nouveau] [<000000004fd189a2>] nouveau_connector_create+0x3a7/0x610 [nouveau] [<00000000c73343a8>] nv50_display_create+0x343/0x980 [nouveau] [<000000002e2b03c3>] nouveau_display_create+0x51f/0x660 [nouveau] [<00000000c924699b>] nouveau_drm_device_init+0x182/0x7f0 [nouveau] [<00000000cc029436>] nouveau_drm_probe+0x20c/0x2c0 [nouveau] [<000000007e961c3e>] local_pci_probe+0x47/0xa0 [<00000000da14d569>] work_for_cpu_fn+0x1a/0x30 [<0000000028da4805&g...
2009 Nov 08
9
2.6.31 xenified kernel - not ready for production
Hi, I just want to know if anybody uses the 2.6.31.4 xenified kernel (aka OpenSUSE) in production? We have been testing it on a new Nehalem Xeon server for a few weeks w/o any problem. But as soon as we tried it on a production machine - after several production domUs started - hard OS failure. We had to switch back to the stock 2.6.18.8 xen kernel. Peter
2017 Dec 13
2
[PATCHv2] virtio_mmio: fix devm cleanup
...add_driver+0x26c/0x5b8 > > [ 3.752248] driver_register+0x16c/0x398 > > [ 3.757211] __platform_driver_register+0xd8/0x128 > > [ 3.770860] virtio_mmio_init+0x1c/0x24 > > [ 3.782671] do_one_initcall+0xe0/0x398 > > [ 3.791890] kernel_init_freeable+0x594/0x660 > > [ 3.798514] kernel_init+0x18/0x190 > > [ 3.810220] ret_from_fork+0x10/0x18 > > > > To fix this, we can simply rip out the explicit cleanup that the devm > > infrastructure will do for us when our probe function returns an error > > code, or when our...
2014 Oct 13
2
v3.17, i915 vs nouveau: possible recursive locking detected
...drm_gem_object_free+0x27/0x30 [drm] [<ffffffffa001cd34>] drm_gem_object_handle_unreference_unlocked+0xe4/0x120 [drm] [<ffffffffa001ce2a>] drm_gem_handle_delete+0xba/0x110 [drm] [<ffffffffa001d495>] drm_gem_close_ioctl+0x25/0x30 [drm] [<ffffffffa001df0c>] drm_ioctl+0x1ec/0x660 [drm] [<ffffffff8148e4b2>] ? __pm_runtime_resume+0x32/0x60 [<ffffffff817102fd>] ? _raw_spin_unlock_irqrestore+0x5d/0x70 [<ffffffff810df15d>] ? trace_hardirqs_on_caller+0xfd/0x1c0 [<ffffffff810df22d>] ? trace_hardirqs_on+0xd/0x10 [<ffffffff817102e2>] ? _raw_spin_un...
2017 Dec 12
2
[PATCH] virtio_mmio: fix devm cleanup
...633] driver_attach+0x48/0x78 [ 3.740249] bus_add_driver+0x26c/0x5b8 [ 3.752248] driver_register+0x16c/0x398 [ 3.757211] __platform_driver_register+0xd8/0x128 [ 3.770860] virtio_mmio_init+0x1c/0x24 [ 3.782671] do_one_initcall+0xe0/0x398 [ 3.791890] kernel_init_freeable+0x594/0x660 [ 3.798514] kernel_init+0x18/0x190 [ 3.810220] ret_from_fork+0x10/0x18 To fix this, we can simply rip out the explicit cleanup that the devm infrastructure will do for us when our probe function returns an error code. We only need to ensure that we call put_device() if a call to register_v...
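The fix described in this message relies on the standard devm pattern: drop the hand-written error-path cleanup, let the driver core release devm-managed resources when probe fails, and keep only a put_device() for the partially registered device. Below is a minimal sketch of that pattern with made-up names (example_dev, example_probe, example_release), not the actual virtio_mmio code:

#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct example_dev {
	struct device dev;
};

static void example_release(struct device *dev)
{
	/* The containing structure is devm-allocated, so there is nothing
	 * to free here; the driver core just needs a release callback. */
}

static int example_probe(struct platform_device *pdev)
{
	struct example_dev *ed;
	int err;

	/* devm allocation: freed automatically if probe returns an error
	 * or when the driver is later unbound, so no kfree() on error paths. */
	ed = devm_kzalloc(&pdev->dev, sizeof(*ed), GFP_KERNEL);
	if (!ed)
		return -ENOMEM;

	device_initialize(&ed->dev);	/* we now hold one reference */
	ed->dev.parent = &pdev->dev;
	ed->dev.release = example_release;

	err = device_add(&ed->dev);
	if (err) {
		/* Registration failed: drop the reference we hold and let
		 * the devm infrastructure clean up everything else. */
		put_device(&ed->dev);
		return err;
	}

	platform_set_drvdata(pdev, ed);
	return 0;
}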
2017 Dec 12
4
[PATCHv2] virtio_mmio: fix devm cleanup
...633] driver_attach+0x48/0x78 [ 3.740249] bus_add_driver+0x26c/0x5b8 [ 3.752248] driver_register+0x16c/0x398 [ 3.757211] __platform_driver_register+0xd8/0x128 [ 3.770860] virtio_mmio_init+0x1c/0x24 [ 3.782671] do_one_initcall+0xe0/0x398 [ 3.791890] kernel_init_freeable+0x594/0x660 [ 3.798514] kernel_init+0x18/0x190 [ 3.810220] ret_from_fork+0x10/0x18 To fix this, we can simply rip out the explicit cleanup that the devm infrastructure will do for us when our probe function returns an error code, or when our remove function returns. We only need to ensure that we cal...
2019 Aug 01
1
[PATCH] drm/nouveau: Only release VCPI slots on mode changes
...elper] drm_mode_setcrtc+0x194/0x6a0 [drm] ? vprintk_emit+0x16a/0x230 ? drm_ioctl+0x163/0x390 [drm] ? drm_mode_getcrtc+0x180/0x180 [drm] drm_ioctl_kernel+0xaa/0xf0 [drm] drm_ioctl+0x208/0x390 [drm] ? drm_mode_getcrtc+0x180/0x180 [drm] nouveau_drm_ioctl+0x63/0xb0 [nouveau] do_vfs_ioctl+0x405/0x660 ? recalc_sigpending+0x17/0x50 ? _copy_from_user+0x37/0x60 ksys_ioctl+0x5e/0x90 ? exit_to_usermode_loop+0x92/0xe0 __x64_sys_ioctl+0x16/0x20 do_syscall_64+0x59/0x190 entry_SYSCALL_64_after_hwframe+0x44/0xa9 WARNING: CPU: 0 PID: 1484 at drivers/gpu/drm/drm_dp_mst_topology.c:3336 drm_dp_atomic_r...
2019 Jul 28
1
[Bug 111242] New: Device driver tries to sync DMA memory it has not allocated
...17:16:25 localhost.localdomain kernel: DMA-API: nouveau 0000:01:00.0: device driver tries to sync DMA memory it has not allocated [device address=0x00000001d1e12000] [size=4096 bytes] jul 28 17:16:25 localhost.localdomain kernel: WARNING: CPU: 6 PID: 1166 at kernel/dma/debug.c:1147 check_sync+0x139/0x660 jul 28 17:16:25 localhost.localdomain kernel: Modules linked in: nf_conntrack_netbios_ns nf_conntrack_broadcast xt_CT ip6t_REJECT nf_reject_ipv6 ip6t_rpfilter ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat iptable_ma...
2019 May 23
4
[Bug 110748] New: [NVC1] [optimus] fifo: read fault at 0000000000 engine 00 [PGRAPH] client 00 reason 02 [PAGE_NOT_PRESENT]
...x2f/0x50 [nouveau] May 21 14:41:25 kernel: nvkm_ioctl+0xde/0x180 [nouveau] May 21 14:41:25 kernel: ? nvkm_ioctl+0x71/0x180 [nouveau] May 21 14:41:25 kernel: usif_ioctl+0x33d/0x700 [nouveau] May 21 14:41:25 kernel: nouveau_drm_ioctl+0xa8/0xb0 [nouveau] May 21 14:41:25 kernel: do_vfs_ioctl+0x405/0x660 May 21 14:41:25 kernel: ksys_ioctl+0x5e/0x90 May 21 14:41:25 kernel: __x64_sys_ioctl+0x16/0x20 May 21 14:41:25 kernel: do_syscall_64+0x5b/0x170 May 21 14:41:25 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9 May 21 14:41:25 kernel: RIP: 0033:0x7f494f39f03b May 21 14:41:25 kernel: Code: 0f 1e f...
2010 Oct 29
2
[LLVMdev] "multiple definition of .. " in clang 2.8
...multiple definition of `getchar_unlocked' av.o:av.c:(.text+0x610): first defined here Hostname.o: In function `putchar': Hostname.c:(.text+0x640): multiple definition of `putchar' av.o:av.c:(.text+0x640): first defined here Hostname.o: In function `fputc_unlocked': Hostname.c:(.text+0x660): multiple definition of `fputc_unlocked' av.o:av.c:(.text+0x660): first defined here Hostname.o: In function `putc_unlocked': Hostname.c:(.text+0x690): multiple definition of `putc_unlocked' av.o:av.c:(.text+0x690): first defined here Hostname.o: In function `putchar_unlocked': Hos...
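The report only shows the linker output. For reference, the simplest way to reproduce this class of "multiple definition" error is a function defined, rather than merely declared, in a header included by more than one translation unit; the hypothetical helper.h/a.c/b.c files below are illustrative only and are not the Hostname.c/av.c sources, whose root cause may be specific to how clang 2.8 emitted the libc wrappers:

/* helper.h (hypothetical): a function *definition* in a header, so every
 * .c file that includes it carries its own copy of helper() */
int helper(void) { return 42; }

/* a.c */
#include "helper.h"

/* b.c */
#include "helper.h"
int main(void) { return helper(); }

/* Linking a.o and b.o together fails with a "multiple definition of
 * `helper'" error from the linker, the same class of error quoted above. */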
2017 Dec 12
0
[PATCHv2] virtio_mmio: fix devm cleanup
...> [ 3.740249] bus_add_driver+0x26c/0x5b8 > [ 3.752248] driver_register+0x16c/0x398 > [ 3.757211] __platform_driver_register+0xd8/0x128 > [ 3.770860] virtio_mmio_init+0x1c/0x24 > [ 3.782671] do_one_initcall+0xe0/0x398 > [ 3.791890] kernel_init_freeable+0x594/0x660 > [ 3.798514] kernel_init+0x18/0x190 > [ 3.810220] ret_from_fork+0x10/0x18 > > To fix this, we can simply rip out the explicit cleanup that the devm > infrastructure will do for us when our probe function returns an error > code, or when our remove function returns. >...
2014 Oct 16
0
[Intel-gfx] v3.17, i915 vs nouveau: possible recursive locking detected
...+0x27/0x30 [drm] > [<ffffffffa001cd34>] drm_gem_object_handle_unreference_unlocked+0xe4/0x120 [drm] > [<ffffffffa001ce2a>] drm_gem_handle_delete+0xba/0x110 [drm] > [<ffffffffa001d495>] drm_gem_close_ioctl+0x25/0x30 [drm] > [<ffffffffa001df0c>] drm_ioctl+0x1ec/0x660 [drm] > [<ffffffff8148e4b2>] ? __pm_runtime_resume+0x32/0x60 > [<ffffffff817102fd>] ? _raw_spin_unlock_irqrestore+0x5d/0x70 > [<ffffffff810df15d>] ? trace_hardirqs_on_caller+0xfd/0x1c0 > [<ffffffff810df22d>] ? trace_hardirqs_on+0xd/0x10 > [<ffffffff817...
2017 Dec 14
0
[PATCHv2] virtio_mmio: fix devm cleanup
...t; > > [ 3.752248] driver_register+0x16c/0x398 > > > [ 3.757211] __platform_driver_register+0xd8/0x128 > > > [ 3.770860] virtio_mmio_init+0x1c/0x24 > > > [ 3.782671] do_one_initcall+0xe0/0x398 > > > [ 3.791890] kernel_init_freeable+0x594/0x660 > > > [ 3.798514] kernel_init+0x18/0x190 > > > [ 3.810220] ret_from_fork+0x10/0x18 > > > > > > To fix this, we can simply rip out the explicit cleanup that the devm > > > infrastructure will do for us when our probe function returns an error &gt...