similar to: High softirq usage in Centos 5

Displaying 10 results from an estimated 10 matches similar to: "High softirq usage in Centos 5"

2005 Jun 02
0
RE: Badness in softirq.c / no modules loaded / related to network interface
> I get the same effect when mounting nfs-exported directories from dom0
> in domU. Every mount/umount/showmount command in domU produces the
> message in the dom0 syslog. I run 2.0.6 compiled from source, with 2.6
> dom0 and 2.4 domU on a P4 HT 3.2GHz.

This is a native Linux bug. A patch has been submitted upstream, but is already in our 2.0-testing and unstable
2005 Jul 06
2
Badness in local_bh_enable at kernel/softirq.c:140
I'm getting the subject error when trying to run linux-iscsi-4.0.2 on domain0. I tried xen-2.0.6, xen-2-test and xen-3-devel, with the same results. I found similar complaints regarding this problem, such as: http://www.ussg.iu.edu/hypermail/linux/kernel/0503.1/1622.html http://www.ussg.iu.edu/hypermail/linux/kernel/0503.1/1621.html Not sure if it is a xen or linux-iscsi related bug. Any ideas how to cure it
2005 Jun 02
0
RE: Badness in softirq.c / no modules loaded / related to network interface
Hello all! I get the same effect when mounting nfs-exported directories from dom0 in domU. Every mount/umount/showmount command in domU produces the message in the dom0 syslog. I run 2.0.6 compiled from source, with 2.6 dom0 and 2.4 domU on a P4 HT 3.2GHz. Perhaps this helps to track the problem down. Greetings, Martin ---------- The messages: (dom0 hostname is zen, domU hostname is ftp,
2008 Dec 02
1
CentOS-4 Xen kernel with low RAM and Badness in local_bh_enable at kernel/softirq.c:141
I have a small Xen VM running CentOS 4 which acts as a router/firewall, and it has been working fine for over 1.5 years with 32MB of RAM and a kernel I either got from xensource.org or built myself from their sources (CentOS 4 didn't have a Xen kernel back then). I lost the kernel to a corrupted disk and decided to use the CentOS-provided Xen kernel. All these months 32MB + 64MB swap was more than
2006 Nov 23
1
BUG: warning at kernel/softirq.c:141
Hello ext3-users, we have an oopsy situation here. We have 4 machines: 3 client nodes and 1 master; the master holds a fairly big repository of small files. The repo's current size is ~40GB, with ~1.2M files in ~100 directories. Now, we'd like to rsync changes from the master to the client nodes, which is working perfectly for 2 nodes, but our 3rd node oopses "sometimes", rendering
2010 Aug 02
4
softirq warnings when calling dev_kfree_skb_irq - bug in conntrack?
Hi, I'm seeing this in the current linux-next tree:

------------[ cut here ]------------
WARNING: at kernel/softirq.c:143 local_bh_enable+0x40/0x87()
Modules linked in: xt_state dm_mirror dm_region_hash dm_log microcode [last unloaded: scsi_wait_scan]
Pid: 0, comm: swapper Not tainted 2.6.35-rc6-next-20100729+ #29
Call Trace:
 <IRQ>  [<ffffffff81030de3>]
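For context (my illustration, not part of the report above): local_bh_enable() contains a WARN_ON_ONCE(irqs_disabled()) check around that line, so it fires whenever bottom halves are re-enabled while hard IRQs are off. When freeing skbs from such contexts, the usual pattern is the context-aware helpers:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static void free_skb_safely(struct sk_buff *skb)
    {
            if (in_irq() || irqs_disabled())
                    dev_kfree_skb_irq(skb);  /* defers the free to softirq time */
            else
                    dev_kfree_skb(skb);      /* immediate free is safe here */
    }

    /* dev_kfree_skb_any(skb) performs exactly this check internally. */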
2006 Mar 15
3
softirq bound to vcpus
In "Understanding the Linux Kernel", 3rd edition, section 4.7 "Softirqs and Tasklets", it states: "Activation and execution [of deferrable functions] are bound together: a deferrable function that has been activated by a given CPU must be executed on the same CPU. There is no self-evident reason suggesting that this rule is beneficial for system performance. Binding the
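A small illustration of that rule (my sketch, using the 2.6-era tasklet API, not code from the thread): scheduling a tasklet from an interrupt handler sets the pending bit in the per-CPU mask of the current CPU, so the deferred function later runs on that same CPU:

    #include <linux/interrupt.h>
    #include <linux/smp.h>

    static void my_deferred_fn(unsigned long data)
    {
            /* Executes in softirq context on the CPU that scheduled it. */
            pr_info("deferred work on CPU %d\n", smp_processor_id());
    }
    static DECLARE_TASKLET(my_tasklet, my_deferred_fn, 0);

    static irqreturn_t my_irq_handler(int irq, void *dev_id)
    {
            tasklet_schedule(&my_tasklet);  /* marks pending on *this* CPU only */
            return IRQ_HANDLED;
    }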
2012 Jan 05
9
[PATCHv2 0 of 2] Deal with IOMMU faults in softirq context.
Hello everyone, reposting after having applied the (minor) fixes suggested by Wei and Jan. Allen, if you can tell us what you think about this, or suggest someone else we could ask for feedback if you're no longer involved with VT-d, that would be great! :-) -- As already discussed here [1], dealing with IOMMU faults in interrupt context may cause nasty things to happen, up to
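The shape of the approach, as I read the series summary (a hedged sketch with hypothetical names, not the actual patch): do only the minimum in the hard interrupt handler, record the fault, and raise a softirq so the heavier processing runs outside interrupt context:

    /* Hypothetical sketch of deferring IOMMU fault handling to a softirq. */
    static void iommu_fault_softirq_action(void)
    {
            /* Safe place for logging, locking, and per-device handling. */
            process_recorded_iommu_faults();     /* hypothetical helper */
    }

    static void iommu_fault_interrupt(int irq)
    {
            record_iommu_fault(irq);             /* minimal work, IRQs off */
            raise_softirq(IOMMU_FAULT_SOFTIRQ);  /* hypothetical softirq nr */
    }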
2008 Sep 12
0
[PATCH] Flush pending softirqs when cpu offline
Hi Keir, thanks for checking in cpu online/offline support. Another thought, inspired by Kevin: because different cpus enter the stop-machine context at different times, there is a small window in which some kind of softirq (say softirq_A) can be raised on the dying cpu right after that cpu has already handled softirq_A in do_softirq, before entering the stop_machine softirq. So this softirq_A
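A minimal sketch of the idea (my paraphrase of the summary above, not the patch itself): before the dying cpu parks itself, drain anything that became pending after its last do_softirq() pass:

    /* Sketch: drain softirqs raised in the window described above. */
    static void flush_pending_softirqs_on_dying_cpu(void)
    {
            while (local_softirq_pending())
                    do_softirq();  /* re-handle e.g. softirq_A before stop_machine */
    }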
2012 May 16
1
[PATCH] virtio_net: invoke softirqs after __napi_schedule
__napi_schedule might raise a softirq, but nothing causes do_softirq to trigger, so it does not in fact run. As a result, the error message "NOHZ: local_softirq_pending 08" sometimes occurs during boot of a KVM guest when the network service is started and we are OOM:

...
Bringing up loopback interface:  [ OK ]
Bringing up interface eth0:  Determining IP information for
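The fix the patch title describes, sketched from the summary above (the vi->napi field name is an assumption on my part): bracket __napi_schedule() with a local_bh_disable()/local_bh_enable() pair, since local_bh_enable() runs any softirqs that became pending in between:

    /* Sketch: make the softirq raised by __napi_schedule() actually run. */
    local_bh_disable();
    __napi_schedule(&vi->napi);  /* raises NET_RX_SOFTIRQ for this CPU */
    local_bh_enable();           /* processes the pending softirq now */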