similar to: RE: Badness in softirq.c / no modules loaded / related to network interface

Displaying 20 results from an estimated 10000 matches similar to: "RE: Badness in softirq.c / no modules loaded / related to network interface"

2005 Jun 02
0
RE: Badness in softirq.c / no modules loaded / related to network interface
Hello all! I get the same effect when mounting NFS-exported directories from dom0 in domU. Every mount/umount/showmount command in domU produces the message in the dom0 syslog. I run 2.0.6 compiled from source, with a 2.6 dom0 and a 2.4 domU on a P4 HT 3.2 GHz. Perhaps this helps to track the problem down. Greetings, Martin ---------- The messages: (dom0 hostname is zen, domU hostname is ftp,
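A rough reproduction sketch of the commands described (hostnames zen and ftp are from the report; the export path is a placeholder, not from the original message):

    # Run in the domU (ftp); each command below reportedly triggers
    # the Badness message in the dom0 (zen) syslog.
    showmount -e zen
    mount -t nfs zen:/export/data /mnt   # /export/data is hypothetical
    umount /mnt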
2005 Jul 06
2
Badness in local_bh_enable at kernel/softirq.c:140
I'm getting the subject warning when trying to run linux-iscsi-4.0.2 on domain 0. I tried xen-2.0.6, xen-2-test and xen-3-devel, with the same results. I found similar complaints about this problem, such as: http://www.ussg.iu.edu/hypermail/linux/kernel/0503.1/1622.html http://www.ussg.iu.edu/hypermail/linux/kernel/0503.1/1621.html I am not sure whether it is a Xen or a linux-iscsi bug. Any ideas how to cure it
2008 Dec 02
1
CentOS-4 Xen kernel with low RAM and Badness in local_bh_enable at kernel/softirq.c:141
I have a small Xen VM running CentOS 4 which acts as a router/firewall, and it has been working fine for over 1.5 years with 32 MB of RAM and a kernel I either got from xensource.org or built myself from their sources (CentOS 4 didn't have a Xen kernel back then). I lost the kernel to a corrupted disk and decided to use the CentOS-provided Xen kernel. All these months 32 MB + 64 MB swap was more than
2005 Apr 11
2
RE: Badness in local_bh_enable
> Badness in local_bh_enable at kernel/softirq.c:140
> [<c011fb12>] local_bh_enable+0x82/0x90
> [<c031fcfd>] skb_checksum+0x13d/0x2d0
> [<c016ac5c>] __pollwait+0x8c/0xd0
> [<c0360d3a>] udp_poll+0x9a/0x160
> [<c031af49>] sock_poll+0x29/0x40
> [<c016b635>] do_pollfd+0x95/0xa0
> [<c016b6aa>] do_poll+0x6a/0xd0
> [<c016b871>]
2005 Feb 13
2
TDMOE + kernel badness
Anybody have any issues running TDMoE on kernel 2.6+? I've got SuSE 9.1 and 9.2 running 2.6.5 and 2.6.8 respectively, and when I enable dynamic spans between them, both boxes dump something similar to:
Badness in local_bh_enable at kernel/softirq.c:141
 [<c0120768>] local_bh_enable+0x48/0x60
 [<c02952b0>] dev_queue_xmit+0x230/0x240
 [<c02a0980>] eth_header+0x0/0x140
2009 Jan 07
0
High softirq usage in CentOS 5
Hi, I have a machine under heavy network traffic. The kernel is the CentOS 2.6.18 SMP 32-bit kernel. The Ethernet controllers are:
05:00.0 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
05:00.1 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
06:01.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet
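For context, softirq CPU time can be measured with standard tools; a minimal sketch, assuming the sysstat package is installed (not part of the original post):

    # %soft column = CPU time spent servicing softirqs, per CPU
    mpstat -P ALL 1
    # top's CPU summary line reports the same figure as "si"
    top -b -n 1 | head -5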
2006 Mar 15
3
softirq bound to vcpus
In "Understanding the Linux Kernel" 3rd edition, section 4.7 "Softirqs and Tasklets" it states: "Activation and execution [of defferable functions] are bound together: a deferrable function that has been activated by a given CPU must be executed on the same CPU. There is no self-evident reason suggesting that this rule is beneficial for system performance. Binding the
2006 Nov 23
1
BUG: warning at kernel/softirq.c:141
Hello ext3-users, we have an oopsy situation here: we have 4 machines, 3 client nodes and 1 master. The master holds a fairly big repository of small files; the repo's current size is ~40 GB, with ~1.2 M files in ~100 directories. Now we would like to rsync changes from the master to the client nodes, which works perfectly for 2 of the nodes, but our 3rd node oopses "sometimes", rendering
2010 Aug 02
4
softirq warnings when calling dev_kfree_skb_irq - bug in conntrack?
Hi, I'm seeing this in the current linux-next tree:
------------[ cut here ]------------
WARNING: at kernel/softirq.c:143 local_bh_enable+0x40/0x87()
Modules linked in: xt_state dm_mirror dm_region_hash dm_log microcode [last unloaded: scsi_wait_scan]
Pid: 0, comm: swapper Not tainted 2.6.35-rc6-next-20100729+ #29
Call Trace: <IRQ> [<ffffffff81030de3>]
2009 Sep 03
1
CTDB: Clustered NFS, reboot, requires me to exportfs -r(a)
Hi Samba, I hope you are doing well. I run a CIFS/NFS CTDB clustered NAS solution, and I find that when I reboot any of the nodes in the cluster, I must re-export the NFS mounts before they show up properly. Perhaps this is a general Linux NFS bug and I am barking up the wrong tree, but I haven't yet found any problem/solution mentioning this besides my own known workaround
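The workaround named in the subject amounts to re-synchronizing the kernel export table after boot; a minimal sketch:

    # Check what the rebooted node currently advertises
    showmount -e localhost
    # Re-export all directories, syncing the export table with /etc/exports
    exportfs -ra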
2016 Oct 03
0
mount.nfs: an incorrect mount option was specified
On Sun, Oct 02, 2016 at 11:42:41PM -0400, Tim Dunphy wrote:
> Hey guys,
>
> My NFS server has been working really well for a long time now. Both
> client and server run CentOS 7.2.
>
> However, when I just had to remount one of my home directories on an NFS
> client, I'm now getting this error when I run mount -a:
>
> mount.nfs: an incorrect mount option was
2016 Oct 03
2
mount.nfs: an incorrect mount option was specified
Hey guys, my NFS server has been working really well for a long time now. Both client and server run CentOS 7.2. However, when I just had to remount one of my home directories on an NFS client, I'm now getting this error when I run mount -a: mount.nfs: an incorrect mount option was specified. This is the corresponding line I have in my fstab file on the client:
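The poster's actual fstab line is cut off above. For illustration only, an NFS entry of the usual shape, with server name, export path, and options as placeholders (a bad token in the options field is what typically produces this error):

    # /etc/fstab  -  hypothetical example, not the original line
    nfsserver:/export/home/user  /home/user  nfs  rw,hard,vers=4  0 0
    # After editing, re-test all fstab entries:
    mount -a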
2009 May 26
1
disabling showmount -e behaviour
I must admit that this question originates in the context of Sun's Storage 7210 product, which imposes additional restrictions on the kind of knobs I can turn. But here's the question: suppose I have an installation where ZFS is the storage for user home directories. Since I need quotas, each directory gets to be its own filesystem. Since I also need these homes to be accessible
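The setup described, one filesystem per home with its own quota, can be sketched as follows (pool and user names hypothetical); note that each shared filesystem becomes a separate NFS export, which is why every home shows up in showmount -e output:

    # One ZFS filesystem per home directory, each with its own quota
    zfs create tank/home/alice
    zfs set quota=10G tank/home/alice
    # Share it over NFS as its own export
    zfs set sharenfs=on tank/home/alice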
2012 Jan 05
9
[PATCHv2 0 of 2] Deal with IOMMU faults in softirq context.
Hello everyone, reposting after having applied the (minor) fixes suggested by Wei and Jan. Allen, if you can tell us what you think about this, or suggest someone else to ask for feedback, if you're no longer involved with VT-d, that would be great! :-) -- As already discussed here [1], dealing with IOMMU faults in interrupt context may cause nasty things to happen, up to
2012 Dec 20
0
nfs.export-dirs
hi All,
# gluster volume info data
Volume Name: data
Type: Distribute
Volume ID: d74ab958-1599-4e82-9358-1eea282d4025
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: tipper:/mnt/brick1
Options Reconfigured:
nfs.export-dirs: on
nfs.export-volumes: off
nfs.export-dir: /install
nfs.port: 2049
nfs.ports-insecure: off
nfs.disable: off
nfs.mount-udp: on
nfs.addr-namelookup: off
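The options listed under "Options Reconfigured" are applied per volume; a minimal sketch of how this combination (export a subdirectory rather than the whole volume) is set, using the volume name from the post:

    # Stop exporting the whole volume over Gluster's built-in NFS
    gluster volume set data nfs.export-volumes off
    # Export only the /install subdirectory instead
    gluster volume set data nfs.export-dirs on
    gluster volume set data nfs.export-dir /install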
2017 Aug 09
0
gluster under the hood
Hi, I am using glusterfs 3.10.3 on CentOS 7.3, kernel 3.10.0-514. I have 2 machines as server nodes for my volume and 1 client machine running CentOS 7.2 with the same kernel.
From the client:
[root at CentOS7286-64 ~]# rpm -qa *gluster*
glusterfs-api-3.7.9-12.el7.centos.x86_64
glusterfs-libs-3.7.9-12.el7.centos.x86_64
glusterfs-fuse-3.7.9-12.el7.centos.x86_64
2005 Dec 23
1
RE: dom0 Errors
Which version of Xen? This usually happens when someone has built a module and forgotten to do "make ARCH=xen". Ian
> I was wondering if anyone can make sense of these errors in
> the message log:
>
> Dec 23 14:14:31 localhost kernel: Badness in local_bh_enable
> at kernel/softirq.
> Dec 23 14:14:31 localhost kernel: [local_bh_enable+130/144]
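Ian's suggestion refers to rebuilding out-of-tree modules against the xenified kernel tree; a rough sketch under that assumption (the module path is hypothetical, and the exact invocation depends on the tree layout):

    # Modules for a xenified 2.6 kernel must be built with ARCH=xen;
    # loading one built without it can trigger this badness
    cd /path/to/module-source
    make ARCH=xen -C /lib/modules/$(uname -r)/build M=$(pwd) modules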
2013 Aug 06
2
NFS - does not list specific directories
Dear all. I am currently sending my backups to a server that shares its directories over NFS. I have run into a problem, so far on two servers, where I cannot see the exported directories. Server: LaCie 5Big Network (proprietary system, no root access. http://www.lacie.com/la/products/product.htm?id=10485) Client1: root at server [/home/cpbackuptmp]# uname -a Linux
2006 Apr 28
2
NFS doesn't start at boot time
Hello, I'm running Xen 3 (xen-unstable tarball from 4/27) with kernel 2.6.16 and Debian Etch in all domains. Most things seem to work very well, but I have some problems with NFS in domU. The NFS server is started at boot time, but I can't access the shares from a client.
--> sigma:~# showmount -a xen-samba
rpc mount dump: RPC: Unable to receive; errno = Connection
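The RPC error indicates the mount daemon on the server cannot be reached at all; a basic troubleshooting sketch (hostname xen-samba from the post):

    # List the RPC services registered on the server;
    # both portmapper and mountd must appear for showmount to work
    rpcinfo -p xen-samba
    # Null-call the portmapper over UDP to test basic reachability
    rpcinfo -u xen-samba portmapper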
2011 Aug 12
2
sr_backend_failure_73
hi, I have created the NFS storage: I installed XCP on two systems and, to create the shared storage, set up one system as the server and the other as the client; from the client I am able to mount the shared storage. Server 10.10.34.133, client 10.10.33.220. From the client, showmount -e 10.10.34.133 shows the export list, localhost.localdomain, and
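On XCP the NFS export is attached as a storage repository via the xe CLI; a sketch under the assumption that the export is reachable from the host (server address from the post, export path and label hypothetical; the SR_BACKEND_FAILURE_73 error from the subject is raised by the storage backend during SR operations):

    # Create a shared NFS SR backed by the export on the server
    xe sr-create type=nfs shared=true name-label="NFS storage" \
        device-config:server=10.10.34.133 \
        device-config:serverpath=/export/xen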