similar to: Slab memory usage on dom0 increases by 128MB/day

Displaying 20 results from an estimated 2000 matches similar to: "Slab memory usage on dom0 increases by 128MB/day"

2011 Sep 01
0
No buffer space available - loses network connectivity
Hi, I have a CentOS 5.6 Xen VPS which loses network connectivity once in a while with the following error:
=========================================
-bash-3.2# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
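On a Xen domU, "No buffer space available" from sendmsg is often (though not always) the IPv4 neighbour table filling up rather than the NIC itself. A quick triage sketch, assuming iproute and sysctl are available in the guest; the threshold value below is purely illustrative:

    # count cached neighbour entries on the guest
    ip -4 neigh show | wc -l
    # compare against the kernel's garbage-collection thresholds
    sysctl net.ipv4.neigh.default.gc_thresh1 net.ipv4.neigh.default.gc_thresh2 net.ipv4.neigh.default.gc_thresh3
    # if the count sits near gc_thresh3, raise the ceiling (example value only)
    sysctl -w net.ipv4.neigh.default.gc_thresh3=4096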
2007 Feb 15
2
Re: [Linux-HA] OCFS2 - Memory hog?
Yes, the clients are doing lots of creates. But my question is: if this is a memory leak, why does ocfs2 eat up the memory as soon as the clients start accessing the filesystem? Within about 5-10 minutes all physical RAM is consumed, but then the memory consumption stops. It does not go into swap. Do you happen to know what version of ocfs2 has the fix? If it was a leak, would the process not be
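If the RAM is going into dentry/inode caches for all those creates rather than leaking, it should show up as reclaimable slab. A rough check, assuming slabtop is installed and a kernel recent enough to have drop_caches (2.6.16+) and the SReclaimable/SUnreclaim counters:

    # biggest slab caches (look for ocfs2 inode caches and dentries)
    slabtop -o | head -15
    # reclaimable cache versus memory the kernel cannot give back
    grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
    # if it is reclaimable, this frees it without a reboot (drops warm caches, otherwise harmless)
    sync; echo 3 > /proc/sys/vm/drop_caches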
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys, My users are reporting some issues with memory on our Lustre 1.8.1 clients. It looks like when they submitted a single job at a time, the run time was about 4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a single client node with 192GB of memory, the run time for each job exceeded 3-4x the run time of the single process. They also noticed that the swap space
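If the clients are being squeezed by Lustre's own client-side caches rather than by the jobs themselves, the per-mount cache and lock limits are worth checking first. A sketch with parameter names as they exist in the 1.8-era client; verify them against your release:

    # ceiling on cached file data per client mount (MB)
    lctl get_param llite.*.max_cached_mb
    # cached DLM locks, each of which can pin pages
    lctl get_param ldlm.namespaces.*.lru_size
    # illustrative cap: limit client-side cache to 32 GB on a 192 GB node
    lctl set_param llite.*.max_cached_mb=32768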
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
Under both XEN 3.0.4 (2.6.16.33) and XEN 3.1 (2.6.18), I can make the OOM killer appear in dom0 of my server by doing heavy I/O from within a VM. If I start 5 VMs on the same server, each VM doing constant I/O over its boot disk (read/write a 2GB file), after about 30 minutes the OOM killer appears in dom0 and starts killing processes. This was observed using 256MB in dom0. If I bump the memory in
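A 256MB dom0 leaves very little headroom for the blkback/netback buffers that heavy guest I/O consumes, so the usual first step is to give dom0 a larger, fixed allocation at boot. A sketch of a GRUB legacy (menu.lst) entry for a Xen 3.x host; the kernel, initrd, and root device names are placeholders for whatever your installation uses:

    title Xen 3.1, Linux 2.6.18-xen
        root (hd0,0)
        # dom0_mem pins dom0 at 1 GB instead of letting it run with 256 MB
        kernel /xen.gz dom0_mem=1024M
        module /vmlinuz-2.6.18-xen ro root=/dev/sda2
        module /initrd-2.6.18-xen.img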
2012 Nov 15
3
Likely mem leak in 3.7
Starting with 3.7-rc1, my workstation seems to lose RAM. Up until (and including) 3.6, used-(buffers+cached) was roughly the same as sum(rss) (taking shared into account). Now there is an approx 6G gap. When the box first starts, it is clearly less swappy than with <= 3.6; I can't tell whether that is related. The reduced swappiness persists. It seems to get worse when I update
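For a suspected kernel-side leak like this (memory unaccounted for in both process RSS and buffers/cached), kmemleak is the usual next step. A sketch, assuming the 3.7 kernel was built with CONFIG_DEBUG_KMEMLEAK=y:

    # debugfs is usually already mounted; this is a no-op if so
    mount -t debugfs none /sys/kernel/debug 2>/dev/null
    # trigger an immediate scan, then list unreferenced objects with allocation backtraces
    echo scan > /sys/kernel/debug/kmemleak
    cat /sys/kernel/debug/kmemleak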
2019 Jul 30
1
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
On Tue, Jul 30, 2019 at 03:14:30PM +0200, Christoph Hellwig wrote: > On Tue, Jul 30, 2019 at 12:55:17PM +0000, Jason Gunthorpe wrote: > > I suspect this was added for the ODP conversion that does use both > > page sizes. I think the ODP code for this is kind of broken, but I > > haven't delved into that.. > > > > The challenge is that the driver needs to know
2016 Nov 10
1
CTDB IP takeover/failover tunables - do you use them?
I'm currently hacking on CTDB's IP takeover/failover code. For Samba 4.6, I would like to rationalise the IP takeover-related tunable parameters. I would like to know if there are any users who set these tunables to non-default values. The tunables in question are:

DisableIPFailover
    Default: 0
    When set to non-zero, ctdb will not perform failover or
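For anyone wanting to answer that question about their own cluster: current values are easy to dump from a running ctdbd, and persistent non-default settings are normally the CTDB_SET_<TunableName> lines in the ctdb configuration. A sketch, assuming the ctdb command-line tool is installed:

    ctdb listvars                      # every tunable with its current value
    ctdb getvar DisableIPFailover      # query one tunable
    ctdb setvar DisableIPFailover 1    # run-time change (not persistent across restarts)
    # persistent overrides are usually CTDB_SET_DisableIPFailover=1 style lines in
    # /etc/sysconfig/ctdb or /etc/default/ctdb, depending on the distribution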
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
Hi Wei, I ran into the following problem when trying to boot another guest after less than a day of uptime (the system already starts 15 guests at boot, which went fine). dom0 is allocated a fixed 1536M. Both the host and the PV guests run the same kernel; some HVM guests run a slightly older kernel (3.9, for example). There are quite a few grant-table messages in xl dmesg; I also included these and a
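An order:4 failure means dom0 could not find 16 physically contiguous pages for the new vif, which after a day of uptime usually points at fragmentation rather than a genuine shortage. A quick check, assuming a dom0 kernel built with CONFIG_COMPACTION:

    # free pages per allocation order; near-zero order-4+ columns indicate fragmentation
    cat /proc/buddyinfo
    # ask the kernel to compact memory, then retry creating the guest
    echo 1 > /proc/sys/vm/compact_memory
    # re-check the grant-table noise afterwards
    xl dmesg | grep -iE 'gnttab|grant'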
2006 Jun 05
3
Swap: typical rehash. Why?
I can't resist. Read the thread that was pointed to on lkml. ROTFLMAO. *Real* UNIX addressed these problems long ago. I guess the "Gurus" suffer from NIH (Not Invented Here) syndrome. Given a "general purpose" system, tunability is a must. UNIX, as delivered by USL in such examples as Sys V, had tunables that let admins tune to their needs. A single "swappiness"
2009 Apr 09
1
[Bridge] Out of memory problem
Hi, I'm using Linux 2.6.21.5 and our kernel freezes. The problem is, if I create a software bridge using the brctl command and add two interfaces, say eth0.0 and eth0.1:

$ brctl addbr br-lan
$ brctl addif br-lan eth0.0
$ brctl addif br-lan eth0.1

and then send traffic from a host connected to one port to a host connected at the other end, soon all the memory is used up and the kernel
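If memory only runs out while traffic is flowing through the bridge, the forwarding database and the skb slab caches are the first places to look. A sketch, assuming bridge-utils and /proc/slabinfo are available on the 2.6.21 box; the cache names may differ slightly between kernel versions:

    # size of the bridge's learned-MAC table
    brctl showmacs br-lan | wc -l
    # kernel object caches that should not grow without bound under steady traffic
    grep -E 'skbuff|bridge_fdb' /proc/slabinfo
    # make sure learned entries actually age out (seconds)
    brctl setageing br-lan 300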
2003 Sep 09
0
CAM/INVARIANTS fix committed
I have committed a fix for the panic that happened with the da(4) or cd(4) drivers configured and INVARIANTS turned on. Let me know if there are any problems/comments/questions. Ken ----- Forwarded message from "Kenneth D. Merry" <ken@FreeBSD.org> ----- From: "Kenneth D. Merry" <ken@FreeBSD.org> Date: Tue, 9 Sep 2003 17:40:40 -0700 (PDT) To:
2017 Feb 11
0
Managesieve cannot access script store
OK, I've figured it out: in the dovecot AppArmor profile the sieve directory is not configured. I solved it this way: to have only one directory to configure in the AppArmor profile, I placed the active-script link inside the .sieve directory, keeping the scripts separate in a store subdirectory, like this: In /etc/dovecot/conf.d/90-sieve.conf : sieve =
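The alternative to moving the link would have been to grant the profile access to the sieve directory itself. A hedged sketch of that route; the profile name, the local-override path, and the home-relative .sieve path are all assumptions to adjust for your distribution (check dmesg or audit.log for the profile that is actually being denied):

    # append rules to the local override for the relevant dovecot child profile
    echo 'owner @{HOME}/.sieve/ rw,'    >> /etc/apparmor.d/local/usr.lib.dovecot.managesieve
    echo 'owner @{HOME}/.sieve/** rwk,' >> /etc/apparmor.d/local/usr.lib.dovecot.managesieve
    # reload the main profile so the override takes effect
    apparmor_parser -r /etc/apparmor.d/usr.lib.dovecot.managesieve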
2010 Apr 05
0
Why does ARC grow above hard limit?
I would appreciate it if somebody can clarify a few points. I am doing some random WRITE testing (100% writes, 100% random) and observe that the ARC grows way beyond the "hard" limit during the test. The hard limit is set to 512 MB via /etc/system and I see the size going up to 1 GB - how is that happening? mdb's ::memstat reports 1.5 GB used - does this include the ARC as well or is
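A few numbers usually clear this up: the ARC's own view of its size and ceiling, and the kernel-wide breakdown from ::memstat (whose buckets do not map one-to-one onto the ARC, so the two will not line up exactly). A sketch for a Solaris/OpenSolaris box:

    # current ARC size and the ceiling that zfs_arc_max actually set (bytes)
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:c_max
    # kernel-wide page accounting; ARC buffers are split between the Kernel and ZFS File Data rows
    echo ::memstat | mdb -k
    # the /etc/system line behind a 512 MB cap (takes effect at boot):
    #   set zfs:zfs_arc_max=0x20000000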
2008 Feb 21
3
Reclaiming transmit descriptors by NIC drivers with Crossbow new scheduling
The following is mainly a capture of parts of multiple off-line discussions among members of the Crossbow team (Gopi, Thiru, Roamer, May-Lin, Thirumailai, Nitin, KB, ...); I thought I'd open it up to other participants. Crossbow's core scheduling involves switching a NIC (or individual Rx rings on the NIC) to polling mode. The receive interrupt will become not only rarer,
2017 Jan 05
3
"[Announce] Samba 4.6.0rc1 Available for Download"
Release Announcements
=====================

This is the first preview release of Samba 4.6. This is *not* intended for production environments and is designed for testing purposes only. Please report any defects via the Samba bug reporting system at https://bugzilla.samba.org/. Samba 4.6 will be the next version of the Samba suite.

UPGRADING
=========

vfs_fruit option
2019 Jul 30
0
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
On Tue, Jul 30, 2019 at 12:55:17PM +0000, Jason Gunthorpe wrote: > I suspect this was added for the ODP conversion that does use both > page sizes. I think the ODP code for this is kind of broken, but I > haven't delved into that.. > > The challenge is that the driver needs to know what page size to > configure the hardware before it does any range stuff. > > The
2013 Oct 30
0
[Announce] CTDB 2.5 available for download
Changes in CTDB 2.5
===================

User-visible changes
--------------------

* The default location of the ctdbd socket is now: /var/run/ctdb/ctdbd.socket
  If you currently set CTDB_SOCKET in configuration then unsetting it will probably do what you want.

* The default location of CTDB TDB databases is now: /var/lib/ctdb
  If you only set CTDB_DBDIR (to the old default of
2009 Apr 24
3
extend raid volume - new drive
Hi there, I have a system with the following:

# fdisk -l

Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        9471    75971385   83  Linux
/dev/sda3