search for: swappers

Displaying 20 results from an estimated 343 matches for "swappers".

2012 Oct 18
2
swapper: page allocation failure. order:1, mode:0x20
I see this occasionally on one of my CentOS 6.3 x64 systems: Oct 18 03:10:52 backup kernel: swapper: page allocation failure. order:1, mode:0x20 Oct 18 03:10:52 backup kernel: Pid: 0, comm: swapper Not tainted 2.6.32-279.9.1.el6.x86_64 #1 Oct 18 03:10:52 backup kernel: Call Trace: Oct 18 03:10:52 backup kernel: <IRQ> [<ffffffff8112789f>] ? __alloc_pages_nodemask+0x77f/0x940 Oct 18
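For context on the message above: order:1, mode:0x20 decodes to a failed GFP_ATOMIC allocation of two contiguous pages, and a mitigation often suggested for this class of failure is raising vm.min_free_kbytes. A read-only sketch (the value in the comment is illustrative, not a recommendation):

```shell
# order:1 means the kernel needed 2^1 = 2 contiguous pages (8 KiB);
# mode:0x20 is GFP_ATOMIC, an interrupt-context allocation that
# cannot sleep to reclaim memory, so it fails when the atomic
# reserve is fragmented or exhausted.
#
# A common mitigation is enlarging the reserve (requires root):
#   sysctl -w vm.min_free_kbytes=65536

# Inspect the current reserve (read-only, safe to run):
if [ -r /proc/sys/vm/min_free_kbytes ]; then
    reserve=$(cat /proc/sys/vm/min_free_kbytes)
else
    reserve="unavailable"
fi
echo "vm.min_free_kbytes: $reserve"
```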
2017 Aug 08
0
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
Never saw this email.... Did anyone get it? Anyone know how to fix this? Thanks again. From: KM <info4km at yahoo.com> To: CentOS mailing list <centos at centos.org> Sent: Monday, August 7, 2017 11:26 AM Subject: Re: [CentOS] BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0] All, This happens on all of our CentOS 7 VMs, but as stated in the email trail, the file
2014 Feb 06
1
"BUG: soft lockup - CPU#n stuck for X s! [swapper:0]"
I just updated my quad-processor X64 machine (AMD Phenom(tm) II X4 945 Processor) to the latest CentOS 5 xen kernel (2.6.18-371.4.1.el5xen) and I am getting occasional "BUG: soft lockup - CPU#n stuck for X s! [swapper:0]" messages. I did some net searching, and found some bugzilla reports (https://bugzilla.redhat.com/show_bug.cgi?id=649519 and http://bugs.centos.org/view.php?id=4488),
2017 Aug 07
4
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
All, This happens on all of our CentOS 7 VMs, but as stated in the email trail, the file softlockup_thresh does not exist. Should it be added? What is the best way to get rid of this behavior? Thanks in advance and sorry if I missed something along the way. KM From: correomm <correomm at gmail.com> To: CentOS mailing list <centos at centos.org> Sent: Thursday, August 18,
2012 Dec 13
7
HVM bug: system crashes after offline online a vcpu
Hi Konrad, I encountered a bug when trying to bring a cpu offline and then online again in HVM. As I'm not very familiar with HVM stuff I cannot come up with a quick fix. The HVM DomU is configured with 4 vcpus. After booting into the command prompt, I do the following operations. # echo 0 > /sys/devices/system/cpu/cpu3/online # echo 1 > /sys/devices/system/cpu/cpu3/online With
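The offline/online cycle in the report above can be sanity-checked without root: a hot-pluggable CPU exposes an `online` file under sysfs, while cpu0 typically does not. A minimal read-only sketch (the toggle itself, shown in the comment, requires root):

```shell
# The crash reproducer toggles a vcpu off and back on (root only):
#   echo 0 > /sys/devices/system/cpu/cpu3/online
#   echo 1 > /sys/devices/system/cpu/cpu3/online

# Read-only check: count CPUs and how many are hot-pluggable,
# i.e. expose an 'online' control file.
ncpu=0
hotplug=0
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -d "$cpu" ] || continue
    ncpu=$((ncpu + 1))
    [ -e "$cpu/online" ] && hotplug=$((hotplug + 1))
done
echo "$ncpu CPUs, $hotplug hot-pluggable"
```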
2016 Dec 08
0
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
Not sure if this was the last email on this. If not, ignore me. However I found a post for newer operating systems that says to set the watchdog_thresh value instead of softlockup_thresh. http://askubuntu.com/questions/592412/why-is-there-no-proc-sys-kernel-softlockup-thresh this is an Ubuntu post, but on my CentOS 7 system this parameter exists, and softlockup_thresh does not. I have set it but
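For reference, which of the two knobs a given kernel exposes can be checked directly; softlockup_thresh was dropped around kernel 2.6.37 in favor of watchdog_thresh, and on newer kernels the soft-lockup warning fires at roughly twice watchdog_thresh seconds. A read-only sketch (writing the value needs root):

```shell
# Older kernels expose softlockup_thresh; newer ones expose
# watchdog_thresh instead, with the lockup message firing at
# roughly 2x that value in seconds.
if [ -r /proc/sys/kernel/softlockup_thresh ]; then
    knob=softlockup_thresh
    val=$(cat /proc/sys/kernel/softlockup_thresh)
elif [ -r /proc/sys/kernel/watchdog_thresh ]; then
    knob=watchdog_thresh
    val=$(cat /proc/sys/kernel/watchdog_thresh)
else
    knob=none
    val=0
fi
echo "$knob=$val"

# To raise it persistently (requires root):
#   echo 'kernel.watchdog_thresh = 30' >> /etc/sysctl.d/90-watchdog.conf
#   sysctl --system
```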
2008 Jul 16
7
Please help: domU becomes unresponsive
Hi all, sorry to intrude on xen-devel, but I think I need direction from the expertise here. I've admin'd Xen servers of various flavors for a couple of years, but never seen this before. After a period ranging from several hours to several days, my primary database and development DomU completely locks up. Net disconnects, but CPU(sec) continues to tick in xentop. No errors, and
2014 Jun 27
2
virt_blk BUG: sleeping function called from invalid context
Hi All, We've had a report[1] of the virt_blk driver causing a lot of spew because it's calling a sleeping function from an invalid context. The backtrace is below. This is with kernel v3.16-rc2-69-gd91d66e88ea9. The reporter is on CC and can give you relevant details. josh [1] https://bugzilla.redhat.com/show_bug.cgi?id=1113805 [drm] Initialized bochs-drm 1.0.0 20130925 for
2016 Aug 18
0
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
2016-08-18 12:39 GMT-04:00 correomm <correomm at gmail.com>: > This bug is reported only on the VMs with CentOS 7 running on VMware > ESXi 5.1. > The vSphere performance graph shows high CPU consumption and disk activity only > on VMs with CentOS 7. Sometimes I cannot connect remotely with ssh > (timeout error). > I'm also seeing those errors in several
2018 Apr 24
0
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
On Mon, 2017-08-07 at 15:26 +0000, KM wrote: > All, This happens on all of our CentOS 7 VMs, but as stated in the > email trail, the file softlockup_thresh does not exist. Should it be > added? What is the best way to get rid of this behavior? > Thanks in advance and sorry if I missed something along the way. KM Yes, I see this behavior as well. Never have found a solution - other
2018 Apr 24
0
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
On 24 April 2018 at 17:16, <m.roth at 5-cent.us> wrote: > Adam Tauno Williams wrote: >> On Mon, 2017-08-07 at 15:26 +0000, KM wrote: >>> All,This happens on all of our CentOS 7 VMs. but as stated in the >>> email trail, the file softlockup_thresh does not exist. Should it be >>> added? What is the best way to get rid of this behavior. >>>
2016 Aug 18
2
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
> 2016-08-18 12:39 GMT-04:00 correomm <correomm at gmail.com>: > >> This bug is reported only on the VMs with CentOS 7 running on VMware >> ESXi 5.1. >> The vSphere performance graph shows high CPU consumption and disk activity only >> on VMs with CentOS 7. Sometimes I cannot connect remotely with ssh >> (timeout error). >> > I'm
2018 Feb 23
2
v4.16-rc2: virtio-block + ext4 lockdep splats / sleeping from invalid context
Hi all, While fuzzing arm64/v4.16-rc2 with syzkaller, I simultaneously hit a number of splats in the block layer: * inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-R} usage in jbd2_trans_will_send_data_barrier * BUG: sleeping function called from invalid context at mm/mempool.c:320 * WARNING: CPU: 0 PID: 0 at block/blk.h:297 generic_make_request_checks+0x670/0x750 ... I've included the
2011 Oct 25
1
Page allocation failure
Dear list, I am seeing an error across multiple machines during heavy I/O, either disk or network. The VMs are on different Intel CPUs (Core 2 Quad, Core i5, Xeon) with varying boards (Abit, Asus, Supermicro). Machines that get this error are running either BackupPC, Zabbix (MySQL) or SABNZBd. I can also reproduce the error on the Supermicros with a looped wget of an ubuntu ISO as they are
2018 Apr 24
2
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
Adam Tauno Williams wrote: > On Mon, 2017-08-07 at 15:26 +0000, KM wrote: >> All, This happens on all of our CentOS 7 VMs, but as stated in the >> email trail, the file softlockup_thresh does not exist. Should it be >> added? What is the best way to get rid of this behavior? >> Thanks in advance and sorry if I missed something along the way. KM > > Yes, I see this
2016 Aug 18
4
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
This bug is reported only on the VMs with CentOS 7 running on VMware ESXi 5.1. The vSphere performance graph shows high CPU consumption and disk activity only on VMs with CentOS 7. Sometimes I cannot connect remotely with ssh (timeout error). The details of the last issues were reported to retrace.fedoraproject.org. Do you have a hint? [root at vmguest ~]# abrt-cli list id
2014 Jun 29
0
virt_blk BUG: sleeping function called from invalid context
On Fri, Jun 27, 2014 at 07:57:38AM -0400, Josh Boyer wrote: > Hi All, > > We've had a report[1] of the virt_blk driver causing a lot of spew > because it's calling a sleeping function from an invalid context. The > backtrace is below. This is with kernel v3.16-rc2-69-gd91d66e88ea9. Hi Jens, pls see below - it looks like the call to blk_mq_end_io from IRQ context is
2018 Jan 05
4
Centos 6 2.6.32-696.18.7.el6.x86_64 does not boot in Xen PV mode
Problems start before any of the kaiser code executes, though it could still be related to CONFIG_KAISER since that has effects beyond kaiser.c. --- (early) Initializing cgroup subsys cpuset (early) Initializing cgroup subsys cpu (early) Linux version 2.6.32-696.18.7.el6.x86_64 (mockbuild at c1bl.rdu2.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC) ) #1 SMP Thu Jan 4 17:31:22 UTC