Displaying 13 results from an estimated 13 matches for "slabtop".
2007 Feb 23
2
OCFS 1.2.4 memory problems still?
I have a 2-node cluster of HP DL380 G4s. These machines are attached via
SCSI to an external HP disk enclosure. They run 32-bit RH AS 4.0 and
OCFS 1.2.4, the latest release. They were upgraded from 1.2.3 only a
few days after 1.2.4 was released. I had reported on the mailing list
that my developers were happy, and things seemed faster. However, twice
in that time, the cluster has gone down due
2017 Dec 19
5
[Bug 104340] New: Memory leak with GEM objects
...inux-4.15 as of a few
minutes ago), and there appears to be a memory leak with nouveau that eats up
about .5 GB every day. Killing X does nothing.
Attached is output of dmesg from the most recent boot of this machine. I've
also attached (in case it matters) dumps of /proc/meminfo and output of slabtop
after I ran X for about two weeks, then exited and killed all major daemons.
More importantly, I've attached a few hundred lines of
/sys/kernel/debug/kmemleak (from the most recent boot). After the first few
entries, they all start looking the same (except for the address and age).
--
You ar...
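For readers unfamiliar with the interface mentioned above, a leak like this is typically inspected as follows (a generic sketch, not taken from the bug report; it assumes a kernel built with CONFIG_DEBUG_KMEMLEAK and debugfs mounted):

    echo scan > /sys/kernel/debug/kmemleak   # trigger an immediate scan for unreferenced allocations
    cat /sys/kernel/debug/kmemleak           # list suspected leaks, one backtrace per entry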
2007 Feb 19
10
"dst cache overflow" messages and crash
Hi,
I regularly have errors (kernel: dst cache overflow) and crash of a
firewall under Linux 2.6.17 and the route patch from Julian Anastasov.
With rtstat I see that the route cache size increases steadily without
ever decreasing.
I have these parameters:
fw:/proc/sys/net/ipv4/route# grep . *
error_burst:1250
error_cost:250
gc_elasticity:15
gc_interval:60
gc_min_interval:0
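For context, "dst cache overflow" is generally logged when the IPv4 route cache cannot be garbage-collected below net.ipv4.route.max_size. A minimal tuning sketch for kernels of that era (the values are illustrative only, not taken from this post):

    # raise the hard limit on cached routes and run garbage collection more often
    sysctl -w net.ipv4.route.max_size=262144
    sysctl -w net.ipv4.route.gc_thresh=65536
    sysctl -w net.ipv4.route.gc_interval=5
    rtstat    # as in the post: watch route-cache size and GC activity over time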
2018 Jun 13
2
C 7: smpboot: CPU 16 is now offline
Current kernel, and I just booted; dmesg shows that of the 32 cores, 0, 2,
4 and 6 are OK, and *all* the others show "is now offline".
What's happening here?
mark
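Two standard checks (not quoted from the post) for which logical CPUs the kernel actually considers online:

    cat /sys/devices/system/cpu/online   # e.g. "0-31" when all 32 logical CPUs are up
    lscpu --extended                     # per-CPU table including an ONLINE column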
2018 Jun 13
0
C 7: smpboot: CPU 16 is now offline, and slabs...
...'s happening here?
>
A followup: I also find a core in /var/spool/abrt, and "reason" is
kernel BUG at mm/slub.c:3601!
In googling, I see threads about incorrect calculation of slabs. Following
one thread, I find
cat /sys/kernel/slab/:t-0000048/cpu_slabs
gives me
4 N0=4
Meanwhile, slabtop shows
Active / Total Slabs (% used) : 25927 / 25927 (100.0%)
which changes, but only varies around that number, and stays at 100%.
So: should I increase the number of slabs, using the kernel parameter
swiotlb, and if so, for what I show above, should I set it to, say, 32000?
mark
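As an aside, swiotlb sizes the DMA bounce buffer rather than the slab caches; the counters being asked about can be read directly. A minimal sketch (the cache name is the one from the post, the rest is generic):

    sudo cat /proc/slabinfo | head             # raw per-cache statistics
    cat /sys/kernel/slab/:t-0000048/objects    # object count for the merged 48-byte cache
    slabtop -o -s c | head -20                 # one-shot view, largest caches first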
2008 Apr 02
2
Memory use and a few other things
Hello,
First of all thanks a lot for working on this free nvidia driver.
1.) I am using Fedora 9 Beta, and on my NV17/64 MB (or NV18?) powered
laptop the nouveau driver uses about 300 MB of RAM. Is there any way I can
decrease that?
2.) Font rendering is very slow. Running "x11perf -aa10text" I only
get 40k glyphs/s.
Is there any way I can improve text performance?
3.) I would like to
2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi to all!
I have problems with concurrent filesystem actions on an OCFS2
filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6.
For example: if I have an LV called testlv which is mounted on /mnt on both
servers, and I run "dd if=/dev/zero of=/mnt/test.a bs=1024
count=1000000" on server 1 while at the same time running du -hs
/mnt/test.a, it takes about 5 seconds for du -hs to execute:
270M
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton:
http://www.linux-mag.com/id/7839
Does anyone have views on whether this sort of caching would be useful for
the MDT? My feeling is that MDT reads are probably pretty random but
writes might benefit...?
GREG
--
Greg Matthews 01235 778658
Senior Computer Systems Administrator
Diamond Light Source, Oxfordshire, UK
2010 Apr 22
1
Odd behavior
Hi Y'all,
I'm seeing some interesting behavior that I was hoping someone could
shed some light on. Basically I'm trying to rsync a lot of files, in a
series of about 60 rsyncs, from one server to another. There are about
160 million files. I'm running 3 rsyncs concurrently to increase the
speed, and as each one finishes, another starts, until all 60 are done.
The machine
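The scheme described (a pool of three concurrent rsyncs drawn from roughly sixty jobs) can be sketched like this; the paths, host name, and job layout are placeholders, not details from the post:

    # run the per-directory rsyncs three at a time; a new one starts as each finishes
    ls /data/batches | xargs -P 3 -I{} rsync -a /data/batches/{}/ backuphost:/data/batches/{}/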
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again,
when the iozone writes are slow. This is what slabtop looks like:
    OBJS   ACTIVE  USE OBJ SIZE   SLABS OBJ/SLAB CACHE SIZE NAME
62476752 62476728   0%    0.10K 1601968       39   6407872K buffer_head
 1000678   999168   0%    0.56K  142954        7    571816K radix_tree_node
  132184   125911   0%    0.03K    1066      124      4264K kmalloc-32
  118496   118224   0%    0.12K    3703       32     14812K kmalloc-node
   73206 5...
2006 Jun 24
8
How to install programs in wine?
I am a rank newbie to Linux and wine.
I am running Ubuntu Dapper on an AMD 1800 MHz machine, wine 0.9.15.
Everything I have read says to use the installer to load Windows programs.
Where is the installer?
Thanks,
--
Ron Thompson On the Beautiful Florida Space Coast, right beside the Kennedy Space Center, USA
http://www.plansandprojects.com My hobby pages are here:
2018 Sep 14
3
NUMA issues on virtualized hosts
Hello,
I have a cluster with AMD EPYC 7351 CPUs, two CPUs per node. I have a performance
8-NUMA configuration:
This is from the hypervisor:
[root@hde10 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA
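Typical follow-up commands for examining the NUMA layout on such a host (standard tools, not quoted from the post):

    numactl --hardware    # node count, per-node memory sizes, and the node distance matrix
    lscpu | grep -i numa  # NUMA node count and per-node CPU lists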
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
We have just discovered that a large buffer cache generated from traversing a
Lustre file system will cause significant system overhead for applications
with high memory demands. We have seen a 50% slowdown or worse for
applications. Even High Performance Linpack, which has no file I/O whatsoever,
is affected. The only remedy seems to be to empty the buffer cache from memory
by running
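The snippet is cut off before the command itself; the usual way to empty the Linux page/buffer cache (a general mechanism, not necessarily the exact command used at that site) is:

    sync                                # flush dirty data first
    echo 3 > /proc/sys/vm/drop_caches   # drop the page cache plus dentries and inodes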