search for: 0x50

Displaying 20 results from an estimated 1155 matches for "0x50".

2014 Sep 04
1
Kernel errors after updating
...00000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 ffff880411cb2aa0 ffff880411cb3058 ffff880411cb9fd8 000000000000fbc8 ffff880411cb3058 Call Trace: [<ffffffff8152a36e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff810d24d0>] ? do_rebuild_sched_domains+0x0/0x50 [<ffffffff8152a20b>] mutex_lock+0x2b/0x50 [<ffffffff810c97b5>] cgroup_lock+0x15/0x20 [<ffffffff810d24e8>] do_rebuild_sched_domains+0x18/0x50 [<ffffffff81094a20>] worker_thread+0x170/0x2a0 [<ffffffff8109afa0>] ? autoremove_wake_function+0x0/0x40 [<ffffffff81094...
2006 Sep 21
12
Hard drive errors
One of my CentOS boxes has started giving me errors. The box is CentOS-4.4 (i386) fully updated. It has a pair of SATA drives in a software raid 1 configuration. The errors I see are: ata1: command 0xca timeout, stat 0x50 host_stat 0x24 ata1: status=0x50 { DriveReady SeekComplete } Info fld=0x1e22b8, Current sda: sense key No Sense ata2: command 0xca timeout, stat 0x50 host_stat 0x24 ata2: status=0x50 { DriveReady SeekComplete } Info fld=0x1e2598, Current sdb: sense key No Sense If it was just c...
2016 Apr 19
0
Bug#820862: AW: Bug#820862: Acknowledgement (xen-hypervisor-4.4-amd64: Xen VM on Jessie freezes often with INFO: task jbd2/xvda2-8:111 blocked for more than 120 seconds)
...000246 0000000000012f00 ffff8800047ebfd8 [ 1920.052171] 0000000000012f00 ffff880004986a20 ffff8800ff3137b0 ffff8800ff80c260 [ 1920.052179] 0000000000000002 ffffffff811d7620 ffff8800047ebc80 ffff880002a5c7c0 [ 1920.052187] Call Trace: [ 1920.052199] [<ffffffff811d7620>] ? generic_block_bmap+0x50/0x50 [ 1920.052208] [<ffffffff815114a9>] ? io_schedule+0x99/0x120 [ 1920.052214] [<ffffffff811d762a>] ? sleep_on_buffer+0xa/0x10 [ 1920.052220] [<ffffffff8151182c>] ? __wait_on_bit+0x5c/0x90 [ 1920.052226] [<ffffffff811d7620>] ? generic_block_bmap+0x50/0x50 [ 1920.052232...
2010 May 14
1
Kernel module fails to initialize on AMD751 based system with NV34
...kernel: [ 3.741258] [drm] nouveau 0000:01:05.0: nouveau_channel_free: freeing fifo 0 May 14 19:17:28 max-desktop kernel: [ 3.741271] ------------[ cut here ]------------ May 14 19:17:28 max-desktop kernel: [ 3.741303] WARNING: at /build/buildd/linux-2.6.32/lib/iomap.c:43 bad_io_access+0x45/0x50() May 14 19:17:28 max-desktop kernel: [ 3.741314] Hardware name: MS-6195 May 14 19:17:28 max-desktop kernel: [ 3.741329] Modules linked in: nouveau(+) ttm drm_kms_helper 8139too amd_k7_agp drm 8139cp usbhid hid floppy i2c_algo_bit mii agpgart pata_amd May 14 19:17:28 max-desktop kernel: [...
2017 Jun 05
0
BUG: KASAN: use-after-free in free_old_xmit_skbs
...k_fair+0xc09/0x2ec0 > > [ 310.058457] dev_queue_xmit+0x10/0x20 > > [ 310.059298] ip_finish_output2+0xacf/0x12a0 > > [ 310.060160] ? dequeue_entity+0x1520/0x1520 > > [ 310.063410] ? ip_fragment.constprop.47+0x220/0x220 > > [ 310.065078] ? ring_buffer_set_clock+0x50/0x50 > > [ 310.066677] ? __switch_to+0x685/0xda0 > > [ 310.068166] ? load_balance+0x38f0/0x38f0 > > [ 310.069544] ? compat_start_thread+0x80/0x80 > > [ 310.070989] ? trace_find_cmdline+0x60/0x60 > > [ 310.072402] ? rt_cpu_seq_show+0x2d0/0x2d0 > > [ 310...
2020 Oct 23
0
kvm+nouveau induced lockdep gripe
...dency detected [ 70.135211] 5.9.0.gf989335-master #1 Tainted: G E [ 70.135216] ------------------------------------------------------ [ 70.135220] libvirtd/1838 is trying to acquire lock: [ 70.135225] ffff983590c2d5a8 (&mm->mmap_lock#2){++++}-{3:3}, at: mpol_rebind_mm+0x1e/0x50 [ 70.135239] but task is already holding lock: [ 70.135244] ffffffff8a585410 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x38/0x390 [ 70.135256] which lock already depends on the new lock. [ 70.135261] the existing dependency chain (in re...
2019 Aug 06
2
Xorg indefinitely hangs in kernelspace
...04 [354073.738334] Call Trace: [354073.738340] __schedule+0x2ba/0x650 [354073.738342] schedule+0x2d/0x90 [354073.738343] schedule_preempt_disabled+0xe/0x10 [354073.738345] __ww_mutex_lock.isra.11+0x3e0/0x750 [354073.738346] __ww_mutex_lock_slowpath+0x16/0x20 [354073.738347] ww_mutex_lock+0x34/0x50 [354073.738352] ttm_eu_reserve_buffers+0x1f9/0x2e0 [ttm] [354073.738356] qxl_release_reserve_list+0x67/0x150 [qxl] [354073.738358] ? qxl_bo_pin+0xaa/0x190 [qxl] [354073.738359] qxl_cursor_atomic_update+0x1b0/0x2e0 [qxl] [354073.738367] drm_atomic_helper_commit_planes+0xb9/0x220 [drm_kms_helper...
2013 Aug 09
1
voltage table 0x50
Hi there, while playing around with my not really working Optimus notebook, I noticed a warning in my logs: nouveau W[ DRM] voltage table 0x50 unknown. Searching for this, I did not find any real information about it, just a bug report which happens to contain the same line. Does anyone on this list have more info and can point me to it, or is there some interest in fixing this? Thanks, Tobias Klausmann
2010 Feb 25
3
[PATCH 1/3] drm/nv50: Implement ctxprog/state generation.
...ODULE_FIRMWARE("nouveau/nvaa.ctxprog"); -MODULE_FIRMWARE("nouveau/nvaa.ctxvals"); -MODULE_FIRMWARE("nouveau/nvac.ctxprog"); -MODULE_FIRMWARE("nouveau/nvac.ctxvals"); +#include "nouveau_grctx.h" #define IS_G80 ((dev_priv->chipset & 0xf0) == 0x50) @@ -111,9 +88,34 @@ nv50_graph_init_ctxctl(struct drm_device *dev) NV_DEBUG(dev, "\n"); - nouveau_grctx_prog_load(dev); - if (!dev_priv->engine.graph.ctxprog) - dev_priv->engine.graph.accel_blocked = true; + if (nouveau_ctxfw) { + nouveau_grctx_prog_load(dev); + dev_priv...
2016 Mar 16
2
[PATCH 0/2] Fix some VID parsing in the voltage table version 0x50
On a very few GPUs with the voltage table version 0x50 we have to read the VIDs out of the entries of the table, whereas all the other GPUs are either PWM based or get a base and a step voltage out of the table header. Currently nouveau tries to autodetect this and actually doesn't parse the entries. This series adds two things: 1. It parses th...
2015 Dec 01
0
[RFC PATCH 1/5] bios/volt: handle voltage table version 0x50 with 0ed header
...fd2776b 100644 --- a/drm/nouveau/nvkm/subdev/bios/volt.c +++ b/drm/nouveau/nvkm/subdev/bios/volt.c @@ -142,7 +142,10 @@ nvbios_volt_entry_parse(struct nvkm_bios *bios, int idx, u8 *ver, u8 *len, info->vid = nvbios_rd08(bios, volt + 0x01) >> 2; break; case 0x40: + break; case 0x50: + info->voltage = nvbios_rd32(bios, volt) & 0x001fffff; + info->vid = idx; break; } return volt; -- 2.6.3
2015 Dec 02
0
[PATCH v2 1/7] bios/volt: handle voltage table version 0x50 with 0ed header
...fd2776b 100644 --- a/drm/nouveau/nvkm/subdev/bios/volt.c +++ b/drm/nouveau/nvkm/subdev/bios/volt.c @@ -142,7 +142,10 @@ nvbios_volt_entry_parse(struct nvkm_bios *bios, int idx, u8 *ver, u8 *len, info->vid = nvbios_rd08(bios, volt + 0x01) >> 2; break; case 0x40: + break; case 0x50: + info->voltage = nvbios_rd32(bios, volt) & 0x001fffff; + info->vid = idx; break; } return volt; -- 2.6.3
2016 Mar 17
0
[PATCH 01/19] bios/volt: handle voltage table version 0x50 with 0ed header
...81a47b2 100644 --- a/drm/nouveau/nvkm/subdev/bios/volt.c +++ b/drm/nouveau/nvkm/subdev/bios/volt.c @@ -142,7 +142,10 @@ nvbios_volt_entry_parse(struct nvkm_bios *bios, int idx, u8 *ver, u8 *len, info->vid = nvbios_rd08(bios, volt + 0x01) >> 2; break; case 0x40: + break; case 0x50: + info->voltage = nvbios_rd32(bios, volt) & 0x001fffff; + info->vid = (nvbios_rd32(bios, volt) >> 23) & 0xff; break; } return volt; -- 2.7.3
2016 Apr 07
0
[PATCH v3 01/29] bios/volt: handle voltage table version 0x50 with 0ed header
...81a47b2 100644 --- a/drm/nouveau/nvkm/subdev/bios/volt.c +++ b/drm/nouveau/nvkm/subdev/bios/volt.c @@ -142,7 +142,10 @@ nvbios_volt_entry_parse(struct nvkm_bios *bios, int idx, u8 *ver, u8 *len, info->vid = nvbios_rd08(bios, volt + 0x01) >> 2; break; case 0x40: + break; case 0x50: + info->voltage = nvbios_rd32(bios, volt) & 0x001fffff; + info->vid = (nvbios_rd32(bios, volt) >> 23) & 0xff; break; } return volt; -- 2.8.1
2016 Apr 18
0
[PATCH v4 01/37] bios/volt: handle voltage table version 0x50 with 0ed header
...81a47b2 100644 --- a/drm/nouveau/nvkm/subdev/bios/volt.c +++ b/drm/nouveau/nvkm/subdev/bios/volt.c @@ -142,7 +142,10 @@ nvbios_volt_entry_parse(struct nvkm_bios *bios, int idx, u8 *ver, u8 *len, info->vid = nvbios_rd08(bios, volt + 0x01) >> 2; break; case 0x40: + break; case 0x50: + info->voltage = nvbios_rd32(bios, volt) & 0x001fffff; + info->vid = (nvbios_rd32(bios, volt) >> 23) & 0xff; break; } return volt; -- 2.8.1
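The masks in the patch above treat each version-0x50 table entry as one packed 32-bit word: the low 21 bits hold the voltage and bits 23–30 hold the VID. A standalone sketch of that extraction (the struct and function names here are made up for illustration; the real driver fills `struct nvbios_volt_entry` inside `nvbios_volt_entry_parse`):

```c
#include <stdint.h>

/* Illustrative holder for one decoded voltage table entry. */
struct volt_entry {
    uint32_t voltage; /* voltage field, low 21 bits of the raw entry */
    uint8_t  vid;     /* voltage id, bits 23-30 of the raw entry */
};

/* Unpack a version-0x50 voltage table entry using the same masks
 * as the patch: voltage = raw & 0x001fffff, vid = (raw >> 23) & 0xff. */
static struct volt_entry parse_volt_0x50(uint32_t raw)
{
    struct volt_entry e;
    e.voltage = raw & 0x001fffff;  /* low 21 bits */
    e.vid     = (raw >> 23) & 0xff; /* bits 23-30 */
    return e;
}
```

Note that the earlier RFC versions of this patch (2015 Dec) used the entry index as the VID (`info->vid = idx`); the later revisions switched to reading the VID out of bits 23–30 of the entry itself, as sketched here.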
2007 Nov 27
0
zpool detach hangs, causes other zpool commands, format, df etc. to hang
...irror causes zpool command to hang with following kernel stack trace: PC: _resume_from_idle+0xf8 CMD: zpool detach disk1 c6t7d0 stack pointer for thread fffffe84d34b4920: fffffe8001c30c10 [ fffffe8001c30c10 _resume_from_idle+0xf8() ] swtch+0x110() cv_wait+0x68() spa_config_enter+0x50() spa_vdev_enter+0x2a() spa_vdev_detach+0x39() zfs_ioc_vdev_detach+0x48() zfsdev_ioctl+0x13e() cdev_ioctl+0x1d() spec_ioctl+0x50() fop_ioctl+0x25() ioctl+0xac() sys_syscall32+0x101() Other zpool commands, df, format all waiting on a mutex lock spa_namespace_loc...
2006 Sep 11
0
Strange kernel message w/ Promise TX4
...TX4 SATA controller. The disks are RAIDed using Linux md with LVM wrapped around the md's for volume management. I currently am running 3 domUs, each with LVM backed vbd's. This server has been up for nearly a month. I am getting these types of kernel messages: ata4: status=0x50 { DriveReady SeekComplete } sdc: Current: sense key=0x0 ASC=0x0 ASCQ=0x0 ata2: status=0x50 { DriveReady SeekComplete } sdb: Current: sense key=0x0 ASC=0x0 ASCQ=0x0 ata1: status=0x50 { DriveReady SeekComplete } sda: Current: sense key=0x0 ASC=0x0 ASCQ=0x0 ata4: status=0x50 { DriveReady S...
2006 Aug 16
1
DMA in HVM guest on x86_64
I've been following unstable day to day with Mercurial, but I'm still having a problem with my HVM testing. I'm using the i686 CentOS + Bluecurve installer and I get the following error in the guest during disk formatting: <4>hda: dma_timer_expiry: dma status == 0x21 <4>hda: DMA timeout error <4>hda: dma timeout error: status=0x58 { DriveReady SeekComplete