Displaying 20 results from an estimated 100 matches similar to: "No buffer space available - loses network connectivity"
2011 Sep 01
0
No buffer space available - loses network connectivity
Hi,
I have a CentOS 5.6 Xen VPS which loses network connectivity once in a
while with the following error.
=========================================
-bash-3.2# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
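A hedged first pass for this kind of ENOBUFS error, assuming it comes from exhausted kernel network buffers or an overflowing neighbour table rather than a driver fault (the sysctl names are standard; the raised value is only illustrative):

    # current socket buffer limits and ARP/neighbour table thresholds
    sysctl net.core.wmem_default net.core.wmem_max
    sysctl net.ipv4.neigh.default.gc_thresh1 net.ipv4.neigh.default.gc_thresh2 net.ipv4.neigh.default.gc_thresh3
    # drops/overruns on the guest interface
    ip -s link show eth0
    # if the neighbour table is the culprit, raise its ceiling (illustrative value)
    sysctl -w net.ipv4.neigh.default.gc_thresh3=4096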
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all!
I have problems with concurrent filesystem actions on an OCFS2
filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6.
E.g.: if I have an LV called testlv which is mounted on /mnt on both
servers and I run "dd if=/dev/zero of=/mnt/test.a bs=1024
count=1000000" on server 1 and at the same time run du -hs
/mnt/test.a, it takes about 5 seconds for du -hs to execute:
270M
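The contention described above reduces to a minimal two-node sketch (mount point and file name taken from the report):

    # node 1: sustained write into the shared OCFS2 mount
    dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000 &
    # node 2: time the size lookup while the write is in flight
    time du -hs /mnt/test.a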
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again,
when the iozone writes are slow, this is what slabtop looks like:
  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
62476752 62476728 0% 0.10K 1601968 39 6407872K buffer_head
1000678 999168 0% 0.56K 142954 7 571816K radix_tree_node
132184 125911 0% 0.03K 1066 124 4264K kmalloc-32
118496 118224 0% 0.12K 3703 32 14812K kmalloc-node
73206 56467 0% 0.19K 3486 21
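The same numbers can be collected non-interactively, sorted by cache size; a minimal sketch assuming procps' slabtop and slab accounting in /proc/meminfo:

    # one-shot slabtop snapshot, largest caches first
    slabtop -o -s c | head -20
    # total slab footprint, split into reclaimable and unreclaimable
    grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo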
2007 Feb 15
2
Re: [Linux-HA] OCFS2 - Memory hog?
Yes, the clients are doing lots of creates.
But my question is: if this is a memory leak, why does ocfs2 eat up the
memory as soon as the clients start accessing the filesystem? Within
about 5-10 minutes all physical RAM is consumed, but then the memory
consumption stops. It does not go into swap.
Do you happen to know what version of ocfs2 has the fix?
If it was a leak, would the process not be
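One hedged way to tell reclaimable cache from a genuine leak, assuming a kernel new enough to provide /proc/sys/vm/drop_caches (2.6.16+):

    grep -E 'MemFree|Cached|Slab' /proc/meminfo
    sync; echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
    grep -E 'MemFree|Cached|Slab' /proc/meminfo
    # if MemFree recovers, the usage was cache pressure, not a leak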
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys,
My users are reporting some issues with memory on our lustre 1.8.1 clients.
It looks like when they submit a single job at a time, the run time is about
4.5 minutes. However, when they run multiple jobs (10 or fewer) on a client
with 192GB of memory on a single node, the run time for each job
exceeds 3-4X the run time for the single process. They also noticed that
the swap space
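A hedged check of the client-side cache on a 1.8.x client; the llite max_cached_mb parameter is assumed to be present on this release:

    # ceiling on data cached by the Lustre client for each mount
    lctl get_param llite.*.max_cached_mb
    # overall memory picture on the client while jobs run
    grep -E 'MemFree|Cached|Slab' /proc/meminfo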
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
Hello.
I'm running Xen 3.0.1 on the following hardware:
Dell SC1425 Rack Server
1x Intel Xeon 2.8GHz (64bit on 32bit OS/Xen) Hyper-threading enabled
2GB memory
80+250GB SATA hard drives (sda, sdb)
Debian Sarge on dom0. Different Debian versions on virtual servers.
This is running as a virtualized web server. It's hosting four virtual
servers. 256MB of memory is reserved for dom0
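To pin down which cache accounts for the 128MB/day, a minimal logging loop in dom0 (log path and interval are arbitrary):

    # hourly snapshot of slab usage and the five largest caches
    while true; do
        date
        grep Slab /proc/meminfo
        slabtop -o -s c | head -5
        sleep 3600
    done >> /var/log/slab-growth.log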
2007 Feb 23
2
OCFS 1.2.4 memory problems still?
I have a 2-node cluster of HP DL380 G4s. These machines are attached via
SCSI to an external HP disk enclosure. They run 32-bit RH AS 4.0 and
OCFS 1.2.4, the latest release. They were upgraded from 1.2.3 only a
few days after 1.2.4 was released. I had reported on the mailing list
that my developers were happy, and things seemed faster. However, twice
in that time, the cluster has gone down due
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
Under both XEN 3.0.4 (2.6.16.33) and XEN 3.1 (2.6.18), I can make the OOM killer appear in dom0 of my server by doing heavy I/O from within a VM. If I start 5 VMs on the same server, each VM doing constant I/O over its boot disk (read/write a 2GB file), after about 30 minutes the OOM killer appears in dom0 and starts killing processes. This was observed using 256MB in dom0. If I bump the memory in
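The per-VM load described above reduces to something like the following loop inside each guest (file path and sizes are illustrative):

    # constant read/write of a 2GB file on the guest's boot disk
    while true; do
        dd if=/dev/zero of=/root/iotest bs=1M count=2048 conv=fsync
        dd if=/root/iotest of=/dev/null bs=1M
    done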
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
Hi Wei,
I ran into the following problem when trying to boot another guest after less than a day of uptime
(the system already started 15 guests at boot, which went fine). dom0 is allocated a fixed 1536M.
Both host and PV guests run the same kernel; some HVMs run a slightly older kernel (3.9, e.g.).
There are quite some grant-table messages in xl dmesg; I also included these and a
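An order:4 failure means no free 64 KiB-contiguous block was available in dom0; a hedged look at fragmentation before starting the next guest:

    # free blocks per order, per zone (columns to the right are the higher orders)
    cat /proc/buddyinfo
    grep -E 'MemFree|Committed_AS' /proc/meminfo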
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello,
I have a cluster with AMD EPYC 7351 CPUs, two CPUs per node. I have a performance
8-NUMA configuration:
This is from the hypervisor:
[root@hde10 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA
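A hedged way to confirm the 8-NUMA layout and see where memory is actually being placed, assuming numactl is installed on the hypervisor:

    # node count, per-node memory and distances
    numactl --hardware
    # per-node allocation hits/misses while the benchmark runs
    numastat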
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens,
Please in your spare time (if there is such a thing at a conference)
pull this branch:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10
for your v3.10 branch. Sorry for being so late with this.
<blurb>
It has the 'feature-max-indirect-segments' implemented in both backend
and frontend. The current problem with the backend and
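On the receiving side, the request above reduces to a fetch and merge into the local v3.10 branch (the local branch name is assumed):

    git checkout for-3.10   # local v3.10 branch, name assumed
    git pull git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10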
2012 Nov 15
3
Likely mem leak in 3.7
Starting with 3.7 rc1, my workstation seems to lose RAM.
Up until (and including) 3.6, used-(buffers+cached) was roughly the same
as sum(rss) (taking shared into account). Now there is an approx 6G gap.
When the box first starts, it is clearly less swappy than with <= 3.6; I
can't tell whether that is related. The reduced swappiness persists.
It seems to get worse when I update
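The comparison described above, used minus (buffers+cached) versus the sum of process RSS, can be reproduced roughly as follows (shared pages are double-counted, so only a ballpark figure):

    free -m
    ps -eo rss= | awk '{ sum += $1 } END { printf "sum(rss) ~ %.0f MiB\n", sum/1024 }'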
2009 Apr 09
1
[Bridge] Out of memory problem
Hi, I'm using Linux 2.6.21.5 and our kernel freezes.
The problem is: if I create a software bridge using the brctl command and
add two interfaces, say eth0.0 and eth0.1, using
$brctl addbr br-lan
$brctl addif br-lan eth0.0
$brctl addif br-lan eth0.1
and when I send traffic from a host connected to one port to a host
connected at the other end, soon all the memory dries up and the kernel
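A hedged way to watch where the memory goes while traffic crosses the bridge, assuming the socket-buffer caches are what grows:

    # skb and generic slab caches, refreshed every second
    watch -n1 "grep -E 'skbuff|^size-' /proc/slabinfo"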
2008 Jul 13
2
2.6.18-92.1.6.el5xen -- 8GB missing?
Hello,
Last night I upgraded a server to CentOS 5.2. The server has 16GB of
RAM. Now that it's running 2.6.18-92.1.6.el5xen only 8GB is reported to
exist.
# cat /proc/meminfo
MemTotal: 8818688 kB
MemFree: 3730124 kB
Buffers: 202004 kB
Cached: 4086788 kB
SwapCached: 0 kB
Active: 1551480 kB
Inactive: 2958196 kB
HighTotal: 0 kB
HighFree:
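Whether the missing 8GB is held back by the hypervisor (a dom0_mem limit or ballooning) rather than lost can be checked from dom0; a hedged sketch for a CentOS 5 Xen host:

    # what Xen itself thinks the machine has versus what is free
    xm info | grep -E 'total_memory|free_memory'
    # whether the hypervisor line caps dom0
    grep -E 'dom0_mem|xen.gz' /boot/grub/grub.conf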
2008 May 04
1
Segmentation fault in 3.63 on 16GB USB
Hi,
I bought a 16GB Transcend JetFlash V10 yesterday and am trying to
put the fc8 livecd on it (after filling it up with all sorts of
other junk :)
I've only plugged it into an fc8 machine (from unopened) - nothing else.
I also have a Toshiba U3 2GB USB.
The OS is fc8, fully updated except the kernel (2.6.23.15-137.fc8).
The livecd-iso-to-disk uses the command
"syslinux -d syslinux
2012 Nov 03
0
mtrr_gran_size and mtrr_chunk_size
Good Day All,
Today I looked at the dmesg log and noticed the following messages
regarding mtrr_gran_size/mtrr_chunk_size.
I am currently running CentOS 6.3; I also installed CentOS 6.2 and 6.1 and
saw the same errors. When I installed CentOS 5.8 on the same laptop
I did not see these errors.
$ lsb_release -a
LSB Version:
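Those messages come from the kernel's MTRR cleanup probing different granule/chunk sizes; a hedged look at the resulting layout, with the boot parameters that can pin it (values are only illustrative):

    # current MTRR register layout
    cat /proc/mtrr
    # if the chosen layout is poor, the sizes can be fixed on the kernel command line, e.g.:
    #   mtrr_gran_size=64M mtrr_chunk_size=64M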
2011 Nov 10
13
dom0 - oom-killer - memory leak somewhere ?
Hello,
I work at a hosting company; we have tens of Xen dom0s running just fine,
but unfortunately we do have a few that get out of control.
Reported behaviour:
- dom0 uses more and more memory
- no process can be found using that memory
- at some point, the OOM killer kicks in and kills everything, until even
SSHing into the box becomes hard
- when there is really no more process to kill, it crashes
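When no process owns the memory, the usual suspects are kernel-side; a hedged snapshot to take as dom0 grows:

    # kernel-side consumers that ps/top will not show
    grep -E 'Slab|SUnreclaim|PageTables|VmallocUsed' /proc/meminfo
    slabtop -o -s c | head -10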
2013 Jun 19
1
Weird I/O hangs (9.1R, arcsas, interrupt spikes on uhci0)
Hi,
Quite regularly, we see I/O hangs of about 10 seconds, roughly once per minute.
Each time this happens, the I/O rate simply drops to zero and all disk access hangs; this is also very noticeable on the shell, for NFS clients, etc. Everything else (networking, kernel, ...) seems to continue normally.
Environment: FreeBSD 9.1R GENERIC on amd64, using ZFS, on a ARC1320 PCIe with 24x Seagate
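Two hedged observations to make during a stall on this FreeBSD box: whether uhci0 shares an interrupt line with the HBA, and whether a single disk is the one holding everything up:

    # interrupt rates per device; an IRQ shared with uhci0 would show here
    vmstat -i
    # per-provider I/O and latency, refreshed every second
    gstat -I 1s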
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss,
Relatively low traffic to the pool, but sync takes too long to complete
and other operations are also not that fast.
Disks are on a 3510 array. zil_disable=1.
bash-3.00# ptime sync
real 1:21.569
user 0.001
sys 0.027
During sync zpool iostat and vmstat look like:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
f3-1         504G   720G    370    859   995K  10.2M
misc        20.6M  52.0G      0      0
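To see which vdevs the sync is actually waiting on, the pool can be sampled for the duration of the call (pool name taken from the output above):

    # per-vdev activity at one-second intervals, alongside the timed sync
    zpool iostat -v f3-1 1 &
    ptime sync
    kill %1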
2007 Oct 04
1
[PATCH 0/5] Boot protocol changes
Hi guys
I gave these patches a try (on top of 2.6.23-rc9 plus the previously
submitted 2.6.24 patch set).
The last two seem to cause Badness on my system, whereby if I start a
guest (using the same bzImage as the host, as before) it seems to boot
OK, and the host system still superficially looks stable (my X session
is OK and I can interact with existing processes) but if I attempt to
launch any