similar to: Re: [Linux-HA] OCFS2 - Memory hog?

Displaying 20 results from an estimated 600 matches similar to: "Re: [Linux-HA] OCFS2 - Memory hog?"

2011 Sep 01
1
No buffer space available - loses network connectivity
Hi, I have a CentOS 5.6 Xen VPS which loses network connectivity once in a while with the following error. ========================================= -bash-3.2# ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available
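One cause that comes up for persistent "No buffer space available" (ENOBUFS) errors on long-running guests is exhaustion of a kernel table such as the ARP/neighbour cache. Purely as a hedged diagnostic sketch, not a confirmed explanation for this VPS, the script below compares the current ARP entry count against the neighbour GC thresholds; the proc paths are standard Linux, nothing else is taken from the report:

#!/usr/bin/env python3
"""Rough ENOBUFS diagnostic: compare ARP/neighbour table usage to its limits.
This is only one possible cause of "No buffer space available"; whether it
applies to the VPS above is an assumption."""

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

def arp_entries():
    # /proc/net/arp has one header line, then one line per cached entry.
    with open("/proc/net/arp") as f:
        return max(0, len(f.readlines()) - 1)

def main():
    base = "/proc/sys/net/ipv4/neigh/default"
    for name in ("gc_thresh1", "gc_thresh2", "gc_thresh3"):
        try:
            print(f"{name}: {read_int(f'{base}/{name}')}")
        except OSError as err:
            print(f"{name}: unreadable ({err})")
    # If this count sits near gc_thresh3, neighbour-table overflow is a
    # plausible explanation and the thresholds can be raised via sysctl.
    print(f"current ARP entries: {arp_entries()}")

if __name__ == "__main__":
    main()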
2006 Apr 09
0
Slab memory usage on dom0 increases by 128MB/day
Hello. I'm running Xen 3.0.1 on the following hardware: Dell SC1425 Rack Server 1x Intel Xeon 2.8GHz (64bit on 32bit OS/Xen) Hyper-threading enabled 2GB memory 80+250GB SATA hard drives (sda, sdb) Debian Sarge on dom0. Different Debian versions on virtual servers. This is running as a virtualized web server. It's hosting four virtual servers. 256MB memory is reserved to dom0
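Since the complaint is specifically that slab usage grows by a fixed amount per day, a small monitor that samples /proc/slabinfo twice and prints the caches that grew the most can narrow down which cache is responsible. This is a generic sketch, not tied to the Xen 3.0.1 setup above; reading /proc/slabinfo usually requires root:

#!/usr/bin/env python3
"""Sample /proc/slabinfo twice and report the caches that grew the most."""
import time

def snapshot():
    sizes = {}
    with open("/proc/slabinfo") as f:
        for line in f:
            if line.startswith(("slabinfo", "#")):
                continue  # skip the version and header lines
            fields = line.split()
            # fields: name active_objs num_objs objsize objperslab pagesperslab ...
            name, active_objs, objsize = fields[0], int(fields[1]), int(fields[3])
            sizes[name] = active_objs * objsize  # rough bytes in use
    return sizes

def main(interval=60):
    before = snapshot()
    time.sleep(interval)
    after = snapshot()
    growth = {k: after.get(k, 0) - before.get(k, 0) for k in after}
    for name, delta in sorted(growth.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{name:30s} {delta / 1024:+10.1f} KiB")

if __name__ == "__main__":
    main()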
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys, My users are reporting some issues with memory on our lustre 1.8.1 clients. It looks like when they submitted a single job at a time, the run time was about 4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a client with 192GB of memory on a single node, the run time for each job exceeded 3-4X the run time of the single process. They also noticed that the swap space
2011 Sep 01
0
No buffer space available - loses network connectivity
Hi, I have a CentOS 5.6 Xen VPS which loses network connectivity once in a while with the following error. ========================================= -bash-3.2# ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available ping: sendmsg: No buffer space available
2007 Aug 05
3
OOM killer observed during heavy I/O from VMs (XEN 3.0.4 and XEN 3.1)
Under both XEN 3.0.4 (2.6.16.33) and XEN 3.1 (2.6.18), I can make the OOM killer appear in dom0 of my server by doing heavy I/O from within a VM. If I start 5 VMs on the same server, each VM doing constant I/O over its boot disk (read/write a 2GB file), after about 30 minutes the OOM killer appears in dom0 and starts killing processes. This was observed using 256MB in dom0. If I bump the memory in
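The reproduction described above (each VM constantly rewriting and rereading a large file on its boot disk) can be scripted inside a guest. A minimal sketch; the file path is an assumption and the 2 GiB size merely mirrors the report:

#!/usr/bin/env python3
"""Constant read/write load on a single file, similar to the test described
above. PATH is an assumed scratch location; FILE_SIZE mirrors the 2GB file
from the report."""
import os

PATH = "/var/tmp/ioload.dat"   # assumed scratch location inside the VM
FILE_SIZE = 2 * 1024**3        # 2 GiB, as in the report
CHUNK = 4 * 1024**2            # read/write in 4 MiB chunks

def write_pass():
    buf = os.urandom(CHUNK)
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # force the data out to the virtual disk

def read_pass():
    with open(PATH, "rb") as f:
        while f.read(CHUNK):
            pass

if __name__ == "__main__":
    while True:                # run until interrupted
        write_pass()
        read_pass()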
2019 Jul 30
1
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
On Tue, Jul 30, 2019 at 03:14:30PM +0200, Christoph Hellwig wrote: > On Tue, Jul 30, 2019 at 12:55:17PM +0000, Jason Gunthorpe wrote: > > I suspect this was added for the ODP conversion that does use both > > page sizes. I think the ODP code for this is kind of broken, but I > > haven't delved into that.. > > > > The challenge is that the driver needs to know
2013 Nov 19
5
xenwatch: page allocation failure: order:4, mode:0x10c0d0 xen_netback:xenvif_alloc: Could not allocate netdev for vif16.0
Hi Wei, I ran into the following problem when trying to boot another guest after less than a day of uptime (the system already started 15 guests at boot, which went fine). dom0 is allocated a fixed 1536M. Both the host and the pv guests run the same kernel; some hvms run a slightly older kernel (3.9, for example). There are quite a few granttable messages in xl dmesg; I also included these and a
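An order:4 allocation failure means the kernel could not find 16 physically contiguous pages at that moment, which is usually a fragmentation symptom rather than a lack of free memory. A hedged sketch that summarises /proc/buddyinfo (free blocks per order) from dom0:

#!/usr/bin/env python3
"""Print free high-order blocks from /proc/buddyinfo.
An order:4 failure (as in the xenvif_alloc report) means no free block of
order >= 4 was available in a suitable zone at that moment."""

def main():
    with open("/proc/buddyinfo") as f:
        for line in f:
            # Format: "Node 0, zone   Normal   123   45   ..." with one count
            # per order, starting at order 0.
            head, rest = line.split("zone")
            zone, *frees = rest.split()
            frees = [int(x) for x in frees]
            big = sum(frees[4:])  # blocks of order >= 4
            print(f"{head.strip()} zone {zone}: order>=4 free blocks = {big}")

if __name__ == "__main__":
    main()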
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all! I have problems with concurrent filesystem actions on an ocfs2 filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6. For example: if I have an LV called testlv which is mounted on /mnt on both servers, and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 and at the same time run du -hs /mnt/test.a, it takes about 5 seconds for du -hs to execute: 270M
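The slowdown can be timed from the second node while the dd runs on the first. A rough sketch that repeatedly times a stat() on the shared file, on the assumption that fetching the inode across the cluster lock is where the delay shows up (stat is only a proxy for what du -hs actually does):

#!/usr/bin/env python3
"""Run on the second node while the first node writes /mnt/test.a with dd,
and time how long each size check takes. The path comes from the report;
that /mnt is the shared ocfs2 mount is an assumption."""
import os
import time

TEST_FILE = "/mnt/test.a"

if __name__ == "__main__":
    for _ in range(10):
        t0 = time.time()
        size = os.stat(TEST_FILE).st_size   # fetching the inode is where
        elapsed = time.time() - t0          # cross-node lock contention shows up
        print(f"stat: {elapsed:.2f}s  size: {size / 2**20:.0f} MiB")
        time.sleep(1)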
2019 Jul 30
2
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
On Tue, Jul 30, 2019 at 08:51:57AM +0300, Christoph Hellwig wrote: > All users pass PAGE_SIZE here, and if we wanted to support single > entries for huge pages we should really just add a HMM_FAULT_HUGEPAGE > flag instead that uses the huge page size instead of having the > caller calculate that size once, just for the hmm code to verify it. I suspect this was added for the ODP
2007 Feb 23
2
OCFS 1.2.4 memory problems still?
I have a 2 node cluster of HP DL380G4s. These machines are attached via scsi to an external HP disk enclosure. They run 32bit RH AS 4.0 and OCFS 1.2.4, the latest release. They were upgraded from 1.2.3 only a few days after 1.2.4 was released. I had reported on the mailing list that my developers were happy, and things seemed faster. However, twice in that time, the cluster has gone down due
2013 Apr 19
14
[GIT PULL] (xen) stable/for-jens-3.10
Hey Jens, Please in your spare time (if there is such a thing at a conference) pull this branch: git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/for-jens-3.10 for your v3.10 branch. Sorry for being so late with this. <blurb> It has the 'feature-max-indirect-segments' implemented in both backend and frontend. The current problem with the backend and
2019 Jul 30
0
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
On Tue, Jul 30, 2019 at 12:55:17PM +0000, Jason Gunthorpe wrote: > I suspect this was added for the ODP conversion that does use both > page sizes. I think the ODP code for this is kind of broken, but I > haven't delved into that.. > > The challenge is that the driver needs to know what page size to > configure the hardware before it does any range stuff. > > The
2013 Nov 11
0
[GIT PULL] (xen) stable/for-linus-3.13-rc0-tag
Hey Linus, Please git pull the following tag: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-linus-3.13-rc0-tag which has tons of fixes and two major features which are concentrated around the Xen SWIOTLB library. The short <blurb> is that the tracing facility (just one function) has been added to SWIOTLB to make it easier to track I/O progress. Additionally under
2004 Apr 19
16
Firewall sizing guidelines?
I have just completed the installation of a new firewall running Shorewall 1.4 on Mandrake 9.2 for our campus network. It appears to be running fairly well so far, but is generating significantly more log entries than our previous linux 2.0.x firewall... Our previous firewall enjoyed more than 6 years of 24/7 operation with no downtime before we finally decided it needed more horsepower, and
2004 Oct 07
1
kmem_cache_destroy: Can't free all objects
Hello! I am writing an FS filter that will sit above the ext3 filesystem. For my own purposes I need several bytes in the inode. There is not enough space in the current inode, so I need to create my own inode functions alloc_inode() & destroy_inode(). The problem occurs when destroying the slab cache while removing my module (rmmod): kmem_cache_destroy: Can't free all objects What I do: - install my
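kmem_cache_destroy() refuses to tear down a cache that still holds live objects, so this message usually means some of the filter's inodes were never freed before rmmod. A user-space sketch that reports the active object count for a named cache from /proc/slabinfo; the cache name used here is hypothetical:

#!/usr/bin/env python3
"""Report active objects for one slab cache before rmmod.
"kmem_cache_destroy: Can't free all objects" means the cache still holds live
objects at destroy time. CACHE_NAME is hypothetical; use whatever name the
module passed to kmem_cache_create()."""

CACHE_NAME = "myfs_inode_cache"   # hypothetical cache name

def main():
    with open("/proc/slabinfo") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == CACHE_NAME:
                active, total = int(fields[1]), int(fields[2])
                print(f"{CACHE_NAME}: {active} active of {total} objects")
                if active:
                    print("objects still allocated; rmmod would trip "
                          "kmem_cache_destroy()")
                return
    print(f"cache {CACHE_NAME!r} not found in /proc/slabinfo")

if __name__ == "__main__":
    main()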
2004 Dec 24
0
dst cache overflow in 2.6.8
There appears to be a pretty serious router bug in kernel 2.6.8. One reference to it is here: http://www.debiantalk.com/_Bug279666_kernel-image-2_6_8-1-k7_Runs_out_of_network_buffers-10116882-5788-a.html and a followup that it may now be fixed in later kernels here: http://lists.debian.org/debian-kernel/2004/12/msg00233.html. This is my personal experience with it.... My router fails few
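"dst cache overflow" means the IPv4 routing cache hit its hard limit. On kernels of that era the limit and GC settings were exposed under /proc/sys/net/ipv4/route/; a hedged sketch that dumps them (the route cache, and some of these files, were removed in later kernels, so their presence is an assumption):

#!/usr/bin/env python3
"""Dump the routing-cache tunables relevant to "dst cache overflow".
These existed on 2.6-era kernels; the route cache was removed later, so
whether these files are present on a given system is an assumption."""
import os

BASE = "/proc/sys/net/ipv4/route"
KNOBS = ("max_size", "gc_thresh", "gc_interval", "gc_timeout")

def main():
    for knob in KNOBS:
        path = os.path.join(BASE, knob)
        try:
            with open(path) as f:
                print(f"{knob}: {f.read().strip()}")
        except OSError:
            print(f"{knob}: not present on this kernel")

if __name__ == "__main__":
    main()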
2018 Jul 27
7
Finding memory usage
I have a CentOS 7 server that is running out of memory and I can't figure out why. Running "free -h" gives me this: total used free shared buff/cache available Mem: 3.4G 2.4G 123M 5.9M 928M 626M Swap: 1.9G 294M 1.6G The problem is that I can't find 2.4G of usage. If I look
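When free reports a couple of GB "used" that ps/top cannot attribute to processes, the usual suspects are kernel-side consumers visible in /proc/meminfo. A generic sketch that prints the larger kernel fields next to the used figure; nothing here is specific to the server above:

#!/usr/bin/env python3
"""Break down "used" memory with the kernel-side fields from /proc/meminfo."""

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # values are in kB
    return info

def main():
    m = meminfo()
    used = m["MemTotal"] - m["MemFree"] - m.get("Buffers", 0) - m.get("Cached", 0)
    print(f"used (total - free - buffers - cached): {used / 1024:.0f} MiB")
    for key in ("Slab", "SUnreclaim", "SReclaimable", "PageTables",
                "KernelStack", "Shmem", "Mlocked"):
        if key in m:
            print(f"{key:14s} {m[key] / 1024:8.0f} MiB")

if __name__ == "__main__":
    main()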
2002 Feb 14
1
[BUG] [PATCH]: handling bad inodes in 2.4.x kernels
Hi folks, I already posted this to the kernel mailing list a few days ago, but nobody there seems to be interested in what I found out. Since I believe this is a serious bug, I'm posting my findings again... The bug concerns the handling of bad inodes in at least the 2.4.16, .17, .18-pre9 and .9 kernel releases (I suspect all 2.4 kernels are affected) and causes the names_cache to get
2005 Oct 18
4
dom0 oom-killer: gfp_mask=0x1d
I had the dom0, which unfortunately didn't have a console on it, hit a race condition and saw oom errors on it also. This happened after it had been running for over 36 hours with a domU whose load average was around 3 most of the time. changeset: 7396:9b51e7637676 Dom0 - UP i686, Centos 4.1, 768 megs domU-1 92 megs snmpd domU-2 92 megs snmpd domU-3 410 megs postgres, tomcat 5.5,
2011 Nov 10
13
dom0 - oom-killer - memory leak somewhere ?
Hello, I work at a hosting company; we have tens of Xen dom0s running just fine, but unfortunately we do have a few that get out of control. Reported behaviour: - dom0 uses more and more memory - no process can be found using that memory - at some point, the oom killer kicks in and kills everything, until even ssh'ing into the box becomes hard - when there is really no more process to kill, it crashes
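"No process can be found using that memory" can be checked directly by summing the resident set size of every process and comparing it with what the kernel accounts as used; a large gap points at kernel memory (slab, page tables, a leaking driver) rather than userspace. A generic sketch:

#!/usr/bin/env python3
"""Compare the sum of all process RSS against overall used memory.
A large gap means the "missing" memory is held by the kernel rather than by
any process, matching the symptom described above."""
import glob

def total_rss_kb():
    total = 0
    for path in glob.glob("/proc/[0-9]*/status"):
        try:
            with open(path) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        total += int(line.split()[1])   # kB
                        break
        except OSError:
            continue    # process exited while we were scanning
    # Note: shared pages are counted once per process, so this can overstate.
    return total

def main():
    with open("/proc/meminfo") as f:
        m = {l.split(":")[0]: int(l.split()[1]) for l in f}
    used = m["MemTotal"] - m["MemFree"] - m.get("Buffers", 0) - m.get("Cached", 0)
    rss = total_rss_kb()
    print(f"used by kernel accounting : {used / 1024:.0f} MiB")
    print(f"sum of process RSS        : {rss / 1024:.0f} MiB")
    print(f"unaccounted (kernel side) : {(used - rss) / 1024:.0f} MiB")

if __name__ == "__main__":
    main()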