
Displaying 20 results from an estimated 8000 matches similar to: "Lockup problem, watchdog, other ways to debug?"

2014 Jun 28
4
[Bug 80627] New: [NVE6] 'HUB_INIT timed out' 'Watchdog detected hard LOCKUP on cpu 7'
https://bugs.freedesktop.org/show_bug.cgi?id=80627 Priority: medium Bug ID: 80627 Assignee: nouveau at lists.freedesktop.org Summary: [NVE6] 'HUB_INIT timed out' 'Watchdog detected hard LOCKUP on cpu 7' QA Contact: xorg-team at lists.x.org Severity: normal Classification: Unclassified
2013 Sep 18
1
How to use watchdog daemon with hardware watchdog driver interface?
Good morning! On a CentOS 6.4 / 64 bit server I have installed the watchdog 5.5 package. The rpm -qi watchdog states: The watchdog program can be used as a powerful software watchdog daemon or may be alternately used with a hardware watchdog device such as the IPMI hardware watchdog driver interface to a resident Baseboard Management Controller (BMC). ... This configuration file is also used to
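The hardware watchdog driver interface referred to here is the kernel's /dev/watchdog character device: opening it arms the timer (backed by the BMC in the IPMI case) and the daemon must keep petting it, or the hardware resets the machine. Below is a minimal sketch of that keep-alive loop, assuming a standard /dev/watchdog node; the real watchdog daemon adds the health checks and options configured in /etc/watchdog.conf.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>

int main(void)
{
    /* Opening the device arms the hardware watchdog timer. */
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/watchdog");
        return 1;
    }

    int timeout = 60;                       /* seconds before the hardware resets the box */
    ioctl(fd, WDIOC_SETTIMEOUT, &timeout);  /* some drivers only support fixed timeouts */

    for (;;) {
        ioctl(fd, WDIOC_KEEPALIVE, 0);      /* "pet" the watchdog */
        sleep(timeout / 2);                 /* stop petting and the machine reboots */
    }
    /* Drivers with "magic close" support are disarmed by writing 'V' before close(). */
}

Whether a missed keep-alive really triggers a reset depends on the underlying driver (ipmi_watchdog, iTCO_wdt, etc.); the sketch only illustrates the interface the watchdog package drives.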
2010 Oct 28
0
HVM + IGD Graphics + 4GB RAM = Soft Lockup
I'm having an issue forwarding through an Intel on-board graphics adapter. This is on a Dell Optiplex 780 with 8GB of RAM. The pass-through works perfectly fine if I have 2GB of RAM assigned to the HVM domU. If I try to assign 3GB or 4GB of RAM, I get the following on the console: [ 41.222073] br0: port 2(vif1.0) entering forwarding state [ 41.269854] (cdrom_add_media_watch()
2016 Mar 10
2
Soft lockups with Xen4CentOS 3.18.25-18.el6.x86_64
I've been running 3.18.25-18.el6.x86_64 + our build of xen 4.4.3-9 on one host for the last couple of weeks and have gotten several soft lockups within the last 24 hours. I am posting here first in case anyone else has experienced the same issue. Here is the first instance: sched: RT throttling activated NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0] Modules linked in:
2017 Nov 24
0
Changing performance.parallel-readdir to on causes CPU soft lockup and very high load all glusterd nodes
Hi, Just to update this thread. We updated from Gluster 3.12.2 to 3.12.3 which resolved the issue it seems. I checked the changelog but don't see anything that looks like this issue, but I'm glad it seems like it's OK now. Niels Hendriks On 14 November 2017 at 09:42, Niels Hendriks <niels at nuvini.com> wrote: > Hi, > > We're using a 3-node setup where GlusterFS
2017 Nov 14
3
Changing performance.parallel-readdir to on causes CPU soft lockup and very high load all glusterd nodes
Hi, We're using a 3-node setup where GlusterFS is running as both a client and a server with a fuse mount-point. We tried to change the performance.parallel-readdir setting to on for a volume, but after that the load on all 3 nodes skyrocketed due to the glusterd process and we saw CPU soft lockup errors in the console. I had to completely bring down/reboot all 3 nodes and disable the
2015 Mar 30
1
Lockup/panic caused by nouveau_fantog_update recursion
Hi, I used to experience kernel panics caused by a CPUx hard lockup almost every day. I'm running Ubuntu 14.10 on vanilla Linux kernel 3.19.2 on a Core i7-3770 with Gallium 0.4 on NV108. The panic log looked like: [ 9227.509744] ------------[ cut here ]------------ [ 9227.509750] WARNING: CPU: 0 PID: 0 at kernel/watchdog.c:290 watchdog_overflow_callback+0x92/0xc0() [ 9227.509751] Watchdog
2014 May 21
0
kernel: NETDEV WATCHDOG: eth0 (r8169): transmit queue 0 timed out
Hi, does anybody know how to fix this? May 20 12:16:15 wolfpac kernel: NETDEV WATCHDOG: eth0 (r8169): transmit queue 0 timed out May 20 12:16:15 wolfpac kernel: Modules linked in: pf_ring(U) af_key iptable_nat ipt_LOG iptable_filter ip_tables nf_conntrack_ipv6 nf_defrag_ipv6 xt_state ip6t_LOG xt_limit ip6table_filter ip6_tables bridge stp llc nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
2014 Aug 04
0
[PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
Hey, on 04-08-14 13:57, Christian König wrote: > On 04.08.2014 at 10:55, Maarten Lankhorst wrote: >> On 04-08-14 10:36, Christian König wrote: >>> Hi Maarten, >>> >>> Sorry for the delay. I've got way too much to do recently. >>> >>> On 01.08.2014 at 19:46, Maarten Lankhorst wrote: >>>> On 01-08-14 18:35, Christian König
2014 Aug 04
2
[PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
> It's a pain to deal with gpu reset. Yeah, well that's nothing new. > I've now tried other solutions, but that would mean reverting to the old style during gpu lockup recovery and only running the delayed work when !lockup. > But this meant that the timeout was useless to add. I think the cleanest is keeping the v2 patch, because potentially any waiting code can be called
2009 Feb 02
4
Xen 3.3.0 cpu cache problems
Dear Xen users, I have a problem with "Xen 3.3.0". All domU (paravirt) only have "32 KB" of cache instead of "6144 KB" as listed in dom0. This is really noticeable under load since the system tends to be really slow. I did not have this problem with "Xen 3.2.1". I'm using the same domU configuration files for the new and the old installation.
2014 Aug 04
0
[PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
On 04-08-14 10:36, Christian König wrote: > Hi Maarten, > > Sorry for the delay. I've got way too much to do recently. > > On 01.08.2014 at 19:46, Maarten Lankhorst wrote: >> >> On 01-08-14 18:35, Christian König wrote: >>> On 31.07.2014 at 17:33, Maarten Lankhorst wrote: >>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst at
2014 Jul 31
0
[PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
Signed-off-by: Maarten Lankhorst <maarten.lankhorst at canonical.com> --- V1 had a nasty bug breaking gpu lockup recovery. The fix is to not allow radeon_fence_driver_check_lockup to take exclusive_lock, and to kill it during lockup recovery instead. --- drivers/gpu/drm/radeon/radeon.h | 3 + drivers/gpu/drm/radeon/radeon_device.c | 5 + drivers/gpu/drm/radeon/radeon_fence.c |
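For context, the shape being discussed is a self-rescheduling delayed work item that watches for fence progress, plus a reset path that cancels it before taking exclusive_lock so the two cannot collide. A rough sketch of that pattern follows; names such as my_device and fence_made_progress are hypothetical stand-ins, not the actual radeon symbols.

#include <linux/device.h>
#include <linux/jiffies.h>
#include <linux/rwsem.h>
#include <linux/workqueue.h>

struct my_device {
    struct device *dev;
    struct delayed_work lockup_work;
    struct work_struct reset_work;
    struct rw_semaphore exclusive_lock;
};

static bool fence_made_progress(struct my_device *mdev); /* hypothetical progress check */

static void my_check_lockup_work(struct work_struct *work)
{
    struct my_device *mdev = container_of(to_delayed_work(work),
                                          struct my_device, lockup_work);

    if (!fence_made_progress(mdev)) {
        dev_warn(mdev->dev, "GPU lockup suspected, scheduling reset\n");
        schedule_work(&mdev->reset_work);       /* recovery runs elsewhere */
        return;                                 /* never take exclusive_lock here */
    }

    /* Still healthy: re-arm the checker. */
    schedule_delayed_work(&mdev->lockup_work, msecs_to_jiffies(500));
}

static void my_gpu_reset(struct my_device *mdev)
{
    /* Kill the checker first so it cannot race with recovery. */
    cancel_delayed_work_sync(&mdev->lockup_work);

    down_write(&mdev->exclusive_lock);
    /* ... re-init rings, complete or resubmit pending fences ... */
    up_write(&mdev->exclusive_lock);

    schedule_delayed_work(&mdev->lockup_work, msecs_to_jiffies(500));
}

The actual patch differs in detail; the sketch only shows why cancelling the work, rather than letting it grab exclusive_lock, sidesteps the locking problem the thread is debating.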
2009 Dec 27
1
[PATCH] drm/nouveau: create function for "dealing" with gpu lockup
It's mostly a cleanup, but in nv50_fbcon_accel_init the gpu lockup message was printed but the HWACCEL_DISABLED flag was not set. Signed-off-by: Marcin Slusarz <marcin.slusarz at gmail.com> --- drivers/gpu/drm/nouveau/nouveau_fbcon.c | 15 +++++++++++---- drivers/gpu/drm/nouveau/nouveau_fbcon.h | 2 ++ drivers/gpu/drm/nouveau/nv04_fbcon.c | 15 +++++----------
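The point of the cleanup is that every fbcon accel path has to do two things on a lockup, print the message and tell fbcon to stop using hardware acceleration, so folding both into one helper keeps callers from forgetting the flag. A minimal sketch of what such a helper does, assuming the standard fb_info flag; the function name here is illustrative, not necessarily the one the patch introduces.

#include <linux/fb.h>
#include <linux/kernel.h>

/* Report the lockup and force fbcon back to unaccelerated drawing. */
static void example_fbcon_gpu_lockup(struct fb_info *info)
{
    printk(KERN_ERR "nouveau: GPU lockup - switching to software fbcon\n");
    info->flags |= FBINFO_HWACCEL_DISABLED;
}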
2014 Aug 04
0
[PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
On 04-08-14 16:37, Christian König wrote: >> It's a pain to deal with gpu reset. > > Yeah, well that's nothing new. > >> I've now tried other solutions but that would mean reverting to the old style during gpu lockup recovery, and only running the delayed work when !lockup. >> But this meant that the timeout was useless to add. I think the cleanest is keeping
2014 Aug 04
0
[PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
On 04-08-14 16:45, Christian König wrote: > On 04.08.2014 at 16:40, Maarten Lankhorst wrote: >> On 04-08-14 16:37, Christian König wrote: >>>> It's a pain to deal with gpu reset. >>> Yeah, well that's nothing new. >>> >>>> I've now tried other solutions but that would mean reverting to the old style during gpu lockup recovery, and
2014 Aug 04
0
[PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
On 04-08-14 17:04, Christian König wrote: > On 04.08.2014 at 16:58, Maarten Lankhorst wrote: >> On 04-08-14 16:45, Christian König wrote: >>> On 04.08.2014 at 16:40, Maarten Lankhorst wrote: >>>> On 04-08-14 16:37, Christian König wrote: >>>>>> It's a pain to deal with gpu reset. >>>>> Yeah, well that's nothing new.
2019 May 18
0
Fwd: Linux (RHEL 7.6 with OSP 14) Bugs
Dears, I have the following bugs that crashed my VM. I reported them to RH, but they didn't answer and banned my developer account. The bug is: when you disable the network on RHEL with OSP 14 installed all-in-one, it crashes the system. I had 12GB of RAM and 8 CPUs on the VM, and I found out that this crash report pissed off someone at RH, because they called me and said what do you want from
2014 Aug 04
2
[PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
On 04.08.2014 at 16:40, Maarten Lankhorst wrote: > On 04-08-14 16:37, Christian König wrote: >>> It's a pain to deal with gpu reset. >> Yeah, well that's nothing new. >> >>> I've now tried other solutions but that would mean reverting to the old style during gpu lockup recovery, and only running the delayed work when !lockup. >>> But this