search for: __down

Displaying 17 results from an estimated 27 matches for "__down".

2006 Jan 02
1
2.6.15-rc6 OOPS
...5b4 000005b4 00000000 00000000 f71b9ea8 c1b70dc0 00000000 > kernel: 7fffffff c031d940 c1807400 00000000 998db100 003d099f c0300b20 f7aee530 > kernel: f7aee658 f71a63e0 f71a63e8 00000292 f7aee530 c02ba525 00000001 f7aee530 > kernel: Call Trace: > kernel: [<c02ba525>] __down+0x75/0xe0 > kernel: [<c0118d70>] default_wake_function+0x0/0x10 > kernel: [<c0172804>] __d_lookup+0xa4/0x110 > kernel: [<c02b8e8f>] __down_failed+0x7/0xc > kernel: [<c016bb42>] .text.lock.namei+0x8/0x1e6 > kernel: [<c0167f35>] do_lookup+0x85/0x90 ...
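
Every trace in these results has the same shape: a task calls down() on a contended semaphore, the inlined fast path fails, and control falls through __down_failed into __down, where the task sleeps in uninterruptible (D) state until the holder calls up(). A rough userspace analogue of that contended path (a sketch, not the kernel source):

/* Userspace sketch of a counting semaphore's contended path; an
 * analogue of the kernel's down()/__down(), not its implementation. */
#include <pthread.h>

struct sem {
    int count;                    /* available slots; 1 for a mutex */
    pthread_mutex_t lock;
    pthread_cond_t wakeup;
};

void sem_down(struct sem *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)         /* contended: analogue of __down() */
        pthread_cond_wait(&s->wakeup, &s->lock);  /* sleep, like state D */
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void sem_up(struct sem *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->wakeup);  /* wake one sleeper, like __up() */
    pthread_mutex_unlock(&s->lock);
}
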
2003 Oct 27
2
EXT3 deadlock in 2.4.22 and 2.4.23-pre7 - quota related?
...dev+89/108] [sys_quotactl+166/275] [system_call+51/56] So it is trying to start a transaction to update the atime on the quota file, and has a lock on some quota structures thanks to "read_dqblk". At the same time, "sync" is running: sync Call Trace: [__down+109/208] [__down_failed+8/12] [.text.lock.dquot+73/286] [ext3_sync_dquot+337/462] [vfs_quota_sync+102/372] [sync_dquots_dev+194/260] [fsync_dev+66/128] [sys_sync+7/16] [system_call+51/56] and has started an ext3 transaction (in ext3_sync_dquot) and is trying...
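
The reasoning above describes the classic AB-BA inversion: the quotactl path holds the quota structures and waits to start a journal transaction, while sync holds a transaction (from ext3_sync_dquot) and blocks in __down on the quota lock. Schematically, with hypothetical lock names standing in for the real ext3/quota symbols:

/* Sketch of the AB-BA inversion described above; the lock names are
 * illustrative stand-ins, not the actual ext3/quota symbols. */
#include <pthread.h>

static pthread_mutex_t dqio_lock = PTHREAD_MUTEX_INITIALIZER; /* quota I/O  */
static pthread_mutex_t journal   = PTHREAD_MUTEX_INITIALIZER; /* txn slot   */

static void *quotactl_path(void *arg)  /* quota lock, then transaction */
{
    pthread_mutex_lock(&dqio_lock);
    pthread_mutex_lock(&journal);      /* blocks if sync got here first */
    pthread_mutex_unlock(&journal);
    pthread_mutex_unlock(&dqio_lock);
    return NULL;
}

static void *sync_path(void *arg)      /* transaction, then quota lock */
{
    pthread_mutex_lock(&journal);
    pthread_mutex_lock(&dqio_lock);    /* opposite order: can deadlock */
    pthread_mutex_unlock(&dqio_lock);
    pthread_mutex_unlock(&journal);
    return NULL;
}
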
2002 Jun 13
1
Processes stuck in D state
...n unaffected. Since there's no kernel oops, I've provided the (gzipped) SysRQ-T output that was dumped by the kernel. You'll notice several sendmail processes stuck like this: sendmail D F0E31F9C 0 32680 436 328 32617 (NOTLB) Call Trace: [page_getlink+34/176] [__down+104/208] [__down_failed+8/12] [.text.lock.namei+53/1221] [link_path_walk+1822/2496] Call Trace: [<c0147402>] [<c01076f8>] [<c01078a4>] [<c0147730>] [<c014453e>] [path_release+16/48] [getname+94/160] [__user_walk+51/80] [sys_lstat64+20/112] [system_call+51/56]...
2007 Jul 24
0
mISDN & Asterisk 1.4: HFC-S card not responsive
...r. [<c1005d3a>] show_trace_log_lvl+0x1a/0x2f [<c10062e3>] show_trace+0x12/0x14 [<c100635e>] dump_stack+0x16/0x18 [<c10363c7>] __lock_acquire+0x118/0x835 [<c1036de1>] lock_acquire+0x61/0x80 [<c11f58b7>] _spin_lock_irqsave+0x33/0x43 [<c11f54f6>] __down+0x3c/0xbf [<c11f5332>] __down_failed+0xa/0x10 [<cd3bb083>] init_module+0x11b/0x15c [mISDN_core] [<c103d23c>] sys_init_module+0x173e/0x1899 [<c1005540>] syscall_call+0x7/0xb ======================= mISDNd: kernel daemon started (current:c6912030) mISDNd: test event...
2006 Nov 08
1
XFS Issues
...e000 0000010105329030 000000000055772a Nov 7 12:50:32 houla0 0002113f79759888 00000100dffca7f0 Nov 7 12:50:32 houla0 Call Trace:<ffffffffa004f448>{:qla2xxx:qla2x00_next+422} <ffffffff8024cc75>{elv_next_request+238} Nov 7 12:50:32 houla0 <ffffffff803094ab>{__down+147} <ffffffff80133da9>{default_wake_function+0} Nov 7 12:50:32 houla0 <ffffffff8030af43>{__down_failed+53} <ffffffffa02133d3>{:xfs:.text.lock.xfs_buf+5} Nov 7 12:50:32 houla0 <ffffffffa0212a1d>{:xfs:pagebuf_iostart+134} <ffffffffa0212a61>{:xfs:x...
2002 Jul 30
1
Disk Hangs with 2.4.18 and ext3
...n issue, see: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=68274 There is lots of memory available. Using serial console and sysrq, we see the nfsd's are in the following state: nfsd D F5D74800 0 19884 1 19894 19885 (L-TLB) Call Trace: [<c0107849>] __down [kernel] 0x69 [<c01079f8>] __down_failed [kernel] 0x8 [<f8a8a00c>] .text.lock.vfs [nfsd] 0x5 [<c0118c39>] __wake_up [kernel] 0x49 [<f8a8d5e8>] nfsd3_proc_lookup [nfsd] 0xd8 [<f8a96b0c>] nfsd_procedures3 [nfsd] 0x6c [<f8a96b0c>] nfsd_procedures3 [nfsd] 0x6c ...
2007 Feb 09
1
USB Problem
...fffffff88021505>] :usbcore:usb_kill_urb+0x72/0x121 Feb 8 19:09:39 mythbox kernel: [<ffffffff88028efd>] :usbcore:usbdev_ioctl+0x1201/0x155c Feb 8 19:09:39 mythbox kernel: [<ffffffff80221c91>] default_wake_function+0x0/0xe Feb 8 19:09:39 mythbox kernel: [<ffffffff803c6c0e>] __down_failed+0x35/0x3a Feb 8 19:09:39 mythbox kernel: [<ffffffff80270cc9>] do_ioctl+0x55/0x6b Feb 8 19:09:39 mythbox kernel: [<ffffffff80270f31>] vfs_ioctl+0x252/0x26b Feb 8 19:09:39 mythbox kernel: [<ffffffff80270f86>] sys_ioctl+0x3c/0x5e Feb 8 19:09:39 mythbox kernel: [<fff...
2009 Nov 06
18
xenoprof: operation 9 failed for dom0 (status: -1)
Renato, When I tried running "opcontrol --start" (after previously running "opcontrol --start-daemon") in dom0, I get this error message: /usr/local/bin/opcontrol: line 1639: echo: write error: Operation not permitted and this message in the Xen console: (XEN) xenoprof: operation 9 failed for dom 0 (status : -1) It looks like opcontrol is trying to do this: echo 1 >
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster is all GigE and has about 608 nodes (1854 cores). We have a lot of jobs that die and/or go into high I/O wait; strace shows processes stuck in fstat(). The big problem (I think; I would like some feedback on it) is that of these 608 nodes, 209 of them have in dmesg
2008 Feb 12
0
Lustre-discuss Digest, Vol 25, Issue 17
...0000006 000001022c7fc030 0000000000000001 >>> 00000100080f1a40 0000000000000246 00000101f6b435a8 >>> 0000000380136025 >>> 00000102270a1030 00000000000000d0 >>> Call Trace:<ffffffffa0216e79>{:lnet:LNetPut+1689} >>> <ffffffff8030e45f>{__down+147} >>> <ffffffff80134659>{default_wake_function+0} >>> <ffffffff8030ff7d>{__down_failed+53} >>> <ffffffffa04292e1>{:lustre:.text.lock.file+5} >>> <ffffffffa044b12e>{:lustre:ll_mdc_blocking_ast+798} >>> <fffffff...
2014 Apr 02
0
[PATCH v8 01/10] qspinlock: A generic 4-byte queue spinlock implementation
...|          |--1.00%-- remove_wait_queue
   |          |--0.56%-- pagevec_lru_move_fn
   |--5.39%-- _raw_spin_lock_irq
   |          |--82.05%-- rwsem_down_read_failed
   |          |--10.48%-- rwsem_down_write_failed
   |          |--4.24%-- __down
   |          |--2.74%-- __schedule
    --0.28%-- [...]
2.20% reaim [kernel.kallsyms] [k] memcpy
1.84% reaim [unknown] [.] 0x000000000041517b
1.77% reaim [kernel.kallsyms] [k] _raw_spin_lock
   |--21.08%-- xlog_cil_insert_items...
2014 Feb 26
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...|          |--1.00%-- remove_wait_queue
   |          |--0.56%-- pagevec_lru_move_fn
   |--5.39%-- _raw_spin_lock_irq
   |          |--82.05%-- rwsem_down_read_failed
   |          |--10.48%-- rwsem_down_write_failed
   |          |--4.24%-- __down
   |          |--2.74%-- __schedule
    --0.28%-- [...]
2.20% reaim [kernel.kallsyms] [k] memcpy
1.84% reaim [unknown] [.] 0x000000000041517b
1.77% reaim [kernel.kallsyms] [k] _raw_spin_lock
   |--21.08%-- xlog_cil_insert_items...
2014 Feb 27
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...|          |--1.00%-- remove_wait_queue
   |          |--0.56%-- pagevec_lru_move_fn
   |--5.39%-- _raw_spin_lock_irq
   |          |--82.05%-- rwsem_down_read_failed
   |          |--10.48%-- rwsem_down_write_failed
   |          |--4.24%-- __down
   |          |--2.74%-- __schedule
    --0.28%-- [...]
2.20% reaim [kernel.kallsyms] [k] memcpy
1.84% reaim [unknown] [.] 0x000000000041517b
1.77% reaim [kernel.kallsyms] [k] _raw_spin_lock
   |--21.08%-- xlog_cil_insert_items...
2016 Jul 12
6
[PATCH] drm/nouveau/fbcon: fix deadlock with FBIOPUT_CON2FBMAP
The FBIOPUT_CON2FBMAP ioctl takes a console_lock(). When this is called while nouveau was runtime suspended, a deadlock would occur due to nouveau_fbcon_set_suspend also trying to obtain console_lock(). Fix this by delaying the drm_fb_helper_set_suspend call. Based on the i915 code (which was done for performance reasons though). Cc: Chris Wilson <chris at chris-wilson.co.uk> Cc: Daniel
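
The pattern described here is to stop taking console_lock inline from the runtime-PM path and instead push the fbcon update to a workqueue, where blocking is safe. A schematic of that pattern follows; schedule_work(), DECLARE_WORK() and console_lock()/console_unlock() are real kernel primitives, but the function names and the elided drm_fb_helper_set_suspend call site are illustrative, not the actual patch:

/* Schematic of deferring the console_lock-taking work; names other than
 * the kernel primitives are illustrative, not the actual nouveau change. */
#include <linux/workqueue.h>
#include <linux/console.h>

static void fbcon_suspend_worker(struct work_struct *work)
{
    /* Runs later in process context, where blocking on console_lock
     * cannot deadlock against the ioctl that triggered the resume. */
    console_lock();
    /* ... drm_fb_helper_set_suspend(...) would go here ... */
    console_unlock();
}

static DECLARE_WORK(fbcon_suspend_work, fbcon_suspend_worker);

static void fbcon_set_suspend_deferred(void)
{
    /* Runtime-PM path: FBIOPUT_CON2FBMAP may already hold console_lock
     * and be waiting on this device, so do not take it here. */
    schedule_work(&fbcon_suspend_work);
}

The trade-off is that the fbcon state is applied asynchronously, so such work has to be flushed or canceled on teardown.
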
2014 Feb 27
14
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5: - Move the optimized 2-task contending code to the generic file to enable more architectures to use it without code duplication. - Address some of the style-related comments by PeterZ. - Allow the use of an unfair queue spinlock in a real para-virtualized execution environment. - Add para-virtualization support to the qspinlock code by ensuring that the lock holder and queue
2014 Mar 12
17
[PATCH v6 00/11] qspinlock: a 4-byte queue spinlock with PV support
v5->v6: - Change the optimized 2-task contending code to make it fairer at the expense of a bit of performance. - Add a patch to support the unfair queue spinlock for Xen. - Modify the PV qspinlock code to follow what was done in the PV ticketlock. - Add performance data for the unfair lock as well as the PV support code. v4->v5: - Move the optimized 2-task contending code to the
2014 Mar 19
15
[PATCH v7 00/11] qspinlock: a 4-byte queue spinlock with PV support
v6->v7: - Remove an atomic operation from the 2-task contending code - Shorten the names of some macros - Make the queue waiter attempt to steal the lock when the unfair lock is enabled. - Remove the lock holder kick from the PV code and fix a race condition - Run the unfair lock & PV code on overcommitted KVM guests to collect performance data. v5->v6: - Change the optimized
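
Taken together, the qspinlock postings above build on MCS-style queueing: each contending CPU spins on its own queue node instead of hammering the shared lock word. As a rough userspace illustration of the queueing idea (textbook MCS with C11 atomics; the kernel additionally packs the queue tail and locked/pending bytes into a single 32-bit word, which is what the "4-byte" in these subject lines refers to):

/* Textbook MCS queue lock; an illustration of the queueing idea,
 * not the kernel's qspinlock word encoding or its paravirt hooks. */
#include <stdatomic.h>
#include <stdbool.h>

struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;
};

struct mcs_lock {
    _Atomic(struct mcs_node *) tail;   /* NULL when uncontended */
};

void mcs_acquire(struct mcs_lock *l, struct mcs_node *me)
{
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    struct mcs_node *prev = atomic_exchange(&l->tail, me);
    if (prev) {
        /* Queue was non-empty: link in and spin on our own node. */
        atomic_store(&prev->next, me);
        while (atomic_load(&me->locked))
            ;   /* local spin: no cache-line bouncing on the lock word */
    }
}

void mcs_release(struct mcs_lock *l, struct mcs_node *me)
{
    struct mcs_node *next = atomic_load(&me->next);
    if (!next) {
        struct mcs_node *expected = me;
        /* No visible successor: try to swing the tail back to empty. */
        if (atomic_compare_exchange_strong(&l->tail, &expected, NULL))
            return;
        /* A successor is mid-enqueue; wait for it to link itself. */
        while (!(next = atomic_load(&me->next)))
            ;
    }
    atomic_store(&next->locked, false);   /* hand the lock to it */
}
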