similar to: No buffer space available - loses network connectivity

Displaying 14 results from an estimated 14 matches similar to: "No buffer space available - loses network connectivity"

2011 Sep 01
1
No buffer space available - loses network connectivity
Hi, I have a CentOS 5.6 Xen VPS which loses network connectivity once in a while with the following error: =========================================
-bash-3.2# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
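Not part of the original post, but a quick way to narrow down where the ENOBUFS comes from is to compare the socket buffer limits against the neighbour (ARP) table thresholds; a minimal sketch, assuming the guest's interface is eth0:

    ip -s link show eth0                               # TX dropped/overrun counters on the interface
    sysctl net.core.wmem_default net.core.wmem_max     # socket send-buffer limits
    ip neigh show | wc -l                              # current neighbour table size
    sysctl net.ipv4.neigh.default.gc_thresh1 \
           net.ipv4.neigh.default.gc_thresh2 \
           net.ipv4.neigh.default.gc_thresh3           # neighbour table overflow is a common cause of ENOBUFS

If the neighbour count sits near gc_thresh3, raising the thresholds is the usual fix; if the interface counters are climbing instead, the problem is more likely in the Xen netfront/netback path.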
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again. This is how slabtop looks when the iozone writes are slow:

    OBJS    ACTIVE  USE  OBJ SIZE  SLABS    OBJ/SLAB  CACHE SIZE  NAME
62476752  62476728  0%    0.10K    1601968    39      6407872K    buffer_head
 1000678    999168  0%    0.56K     142954     7       571816K    radix_tree_node
  132184    125911  0%    0.03K       1066   124         4264K    kmalloc-32
  118496    118224  0%    0.12K       3703    32        14812K    kmalloc-node
   73206     56467  0%    0.19K       3486    21
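buffer_head at roughly 6.4 GB dominates the slab here. Not from the original thread, but a quick way to confirm whether that slab is reclaimable cache rather than a leak (sketch; drop_caches discards clean page cache and slab, so expect a temporary performance dip afterwards):

    grep -i -e '^Slab' -e SReclaimable -e SUnreclaim /proc/meminfo
    sync
    echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes (and with them buffer_head)
    slabtop -o -s c | head                # one-shot listing, sorted by cache size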
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
Hello. I'm running Xen 3.0.1 on the following hardware: Dell SC1425 Rack Server, 1x Intel Xeon 2.8GHz (64-bit, on 32-bit OS/Xen), Hyper-Threading enabled, 2GB memory, 80+250GB SATA hard drives (sda, sdb). Debian Sarge on dom0, different Debian versions on the virtual servers. This is running as a virtualized web server hosting four virtual servers. 256MB memory is reserved to dom0
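The original post predates it, but a simple way to track which slab cache accounts for the ~128MB/day growth in dom0 is to log slab usage periodically; a rough sketch (the log path is hypothetical):

    while true; do
        date
        grep Slab /proc/meminfo            # total slab usage
        slabtop -o -s c | head -12         # largest caches, one-shot, sorted by size
        sleep 3600
    done >> /var/log/slab-growth.log

Comparing the hourly snapshots shows whether the growth sits in one cache (a likely leak) or is spread across ordinary inode/dentry/buffer caches.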
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all! I have problems with concurrent filesystem actions on an OCFS2 filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6. For example: if I have an LV called testlv which is mounted on /mnt on both servers, and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 and at the same time a du -hs /mnt/test.a, it takes about 5 seconds for du -hs to execute: 270M
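Not from the original message, but the effect is easy to reproduce and time; a minimal sketch, assuming the same mount point on both nodes (the idle comparison file is hypothetical):

    # on node 1: write ~1GB into the shared filesystem
    dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000 &

    # on node 2, while the dd is running:
    time du -hs /mnt/test.a            # file being written on the other node
    time du -hs /mnt/some-idle-file    # untouched file, for comparison

If only the file under concurrent write is slow, the time is going into the cluster lock handoff (the writing node has to flush its dirty data before the other node may stat the file) rather than into the storage itself.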
2007 Feb 15
2
Re: [Linux-HA] OCFS2 - Memory hog?
Yes, the clients are doing lots of creates. But my question is: if this is a memory leak, why does ocfs2 eat up the memory as soon as the clients start accessing the filesystem? Within about 5-10 minutes all physical RAM is consumed, but then the memory consumption stops. It does not go into swap. Do you happen to know what version of ocfs2 has the fix? If it was a leak would the process not be
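"All RAM used but no swapping" is also what healthy inode/dentry caching looks like. Not part of the original exchange, but a quick way to tell cache from leak is to check how much of the usage the kernel reports as reclaimable; a sketch:

    free -m                      # 'buffers/cache' line vs. truly used memory
    grep -i slab /proc/meminfo
    slabtop -o -s c | head       # ocfs2 inode and dentry caches should top the list if it is cache

If the big consumers are reclaimable caches, memory pressure (or, on kernels that support it, echo 2 > /proc/sys/vm/drop_caches) will shrink them; a real leak will not give the memory back.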
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
Under both XEN 3.0.4 (2.6.16.33) and XEN 3.1 (2.6.18), I can make the OOM killer appear in dom0 of my server by doing heavy I/O from within a VM. If I start 5 VMs on the same server, each VM doing constant I/O over its boot disk (read/write a 2GB file), after about 30 minutes the OOM killer appears in dom0 and starts killing processes. This was observed using 256MB in dom0. If I bump the memory in
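Not from the original report, but two knobs commonly used to give dom0 more headroom for backend allocations under this kind of load (a sketch; the values are illustrative):

    # Xen hypervisor line in grub: pin dom0 to a fixed, larger allocation
    kernel /boot/xen.gz dom0_mem=1024M

    # inside dom0: keep a larger emergency reserve for atomic allocations
    echo 16384 > /proc/sys/vm/min_free_kbytes

The exact grub stanza depends on the distribution; both settings trade guest-usable memory for dom0 stability.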
2007 Feb 23
2
OCFS 1.2.4 memory problems still?
I have a 2-node cluster of HP DL380 G4s. These machines are attached via SCSI to an external HP disk enclosure. They run 32-bit RH AS 4.0 and OCFS 1.2.4, the latest release. They were upgraded from 1.2.3 only a few days after 1.2.4 was released. I had reported on the mailing list that my developers were happy, and things seemed faster. However, twice in that time, the cluster has gone down due
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
Hi Wei, I ran into the following problem when trying to boot another guest after less than a day of uptime (the system already started 15 guests at boot, which went fine). dom0 is allocated a fixed 1536M. Both the host and the PV guests run the same kernel; some HVMs run a slightly older kernel (3.9, for example). There are quite a few grant-table messages in xl dmesg; I also included these and a
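xenvif_alloc failing on an order:4 allocation usually means dom0's memory is too fragmented to hand out a 64KB contiguous chunk, not that it is out of memory overall. Not from the original mail, but a quick way to check (sketch):

    cat /proc/buddyinfo                      # free pages per order; near-zero counts at order >= 4 means fragmentation
    sysctl vm.min_free_kbytes                # a larger reserve keeps more contiguous memory free
    echo 1 > /proc/sys/vm/compact_memory     # ask the kernel to compact memory (needs CONFIG_COMPACTION)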
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello, I have a cluster with AMD EPYC 7351 CPUs, two CPUs per node, in the performance 8-NUMA configuration. This is from the hypervisor:

[root@hde10 ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             2
NUMA
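Not part of the original post, but alongside lscpu it helps to capture the NUMA topology and, if libvirt is in use, how the guests are pinned; a sketch (the domain name is a placeholder):

    numactl --hardware          # nodes, per-node free memory, distance matrix
    lscpu | grep -i numa        # NUMA node CPU lists
    virsh numatune guest01      # NUMA memory policy for a guest (hypothetical domain name)
    virsh vcpupin guest01       # vCPU-to-pCPU pinning for the same guest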
2010 Apr 19
20
Lustre Client - Memory Issue
Hi guys, my users are reporting some issues with memory on our Lustre 1.8.1 clients. When they submit a single job at a time, the run time is about 4.5 minutes. However, when they run multiple jobs (10 or fewer) on a client with 192GB of memory on a single node, the run time for each job exceeds 3-4x the run time of the single process. They also noticed that the swap space
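Not from the original mail, but on the client side the Lustre page-cache and dirty-data limits are worth checking before blaming the jobs themselves; a sketch, assuming lctl is available on the 1.8.1 clients:

    lctl get_param llite.*.max_cached_mb     # per-filesystem cap on cached pages
    lctl get_param osc.*.max_dirty_mb        # per-OSC cap on dirty data
    free -g; grep -i -e dirty -e writeback /proc/meminfo

Lowering the cache cap (lctl set_param llite.*.max_cached_mb=<mb>) limits how much client RAM Lustre will hold, at some cost in re-read performance.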
2008 May 04
1
Segmentation fault in 3.63 on 16GB USB
Hi, I bought a 16GB Transcend JetFlash V10 yesterday and am trying to put the fc8 livecd on it (after filling it up with all sorts of other junk :) I've only plugged it into an fc8 machine (straight from the package), nothing else. I also have a Toshiba U3 2GB USB stick. The OS is fc8, fully updated except the kernel (2.6.23.15-137.fc8). The livecd-iso-to-disk script uses the command "syslinux -d syslinux
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens, Please in your spare time (if there is such a thing at a conference) pull this branch: git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10 for your v3.10 branch. Sorry for being so late with this. <blurb> It has the 'feature-max-indirect-segments' implemented in both backend and frontend. The current problem with the backend and
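Not from the pull request itself, but on a running system it is easy to see whether a backend advertises the new key to its frontends; a sketch with illustrative xenstore paths:

    # list block backends exported by dom0 and look for the indirect-segment key
    xenstore-ls /local/domain/0/backend/vbd | grep feature-max-indirect-segments

A frontend that understands the key can negotiate up to that many segments per request instead of the classic 11-segment ring limit.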
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss, there is relatively low traffic to the pool, but sync takes too long to complete and other operations are also not that fast. Disks are on a 3510 array. zil_disable=1.

bash-3.00# ptime sync
real     1:21.569
user        0.001
sys         0.027

During sync, zpool iostat and vmstat look like:

f3-1    504G   720G    370    859   995K  10.2M
misc   20.6M  52.0G      0      0
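Not part of the original post, but capturing pool and system activity for the whole duration of the slow sync makes the numbers easier to interpret; a sketch:

    # in one terminal: per-vdev pool activity, one-second samples
    zpool iostat -v 1

    # in another: system-wide paging/CPU, one-second samples
    vmstat 1

    # then time the sync itself
    ptime sync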
2013 Jun 19
1
Weird I/O hangs (9.1R, arcsas, interrupt spikes on uhci0)
Hi, at fairly regular intervals we see I/O hangs for about 10 seconds, roughly once per minute. Each time this happens, the I/O rate simply drops to zero and all disk access hangs; this is also very noticeable on the shell, for NFS clients etc. Everything else (networking, kernel, ?) seems to continue normally. Environment: FreeBSD 9.1R GENERIC on amd64, using ZFS, on an ARC1320 PCIe with 24x Seagate
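Not from the original report, but the interrupt-spike angle is easy to watch live on FreeBSD; a sketch:

    vmstat -i                 # cumulative interrupt counts and rates per device (look at uhci0)
    systat -vmstat 1          # live view; watch the interrupt column when the stall happens
    gstat                     # per-disk I/O; during a hang, busy disks with zero throughput point at the controller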