search for: cachelist

Displaying 14 results from an estimated 14 matches for "cachelist".

2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
...---
Kernel              860254    3360   21%
ZFS File Data         3047      11    0%
Anon                 38246     149    1%
Exec and libs         3765      14    0%
Page cache            8517      33    0%
Free (cachelist)      5866      22    0%
Free (freelist)    3272317   12782   78%
Total              4192012   16375
Physical           4192011   16375
mem_inuse       4145901568
mem_total       107...
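For anyone hitting the same symptom, a minimal sketch of checking the flag and the ARC's own view of itself on a live system; this assumes arc_no_grow is still resolvable as a kernel symbol and that the ::arc dcmd exists on your build (both are private and vary between releases):

# is the ARC currently refusing to grow? (non-zero = yes)
echo "arc_no_grow/D" | mdb -k
# summary of ARC state from the zfs mdb module, where available
echo "::arc" | mdb -k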
2007 Nov 08
5
mdb ::memstat including zfs buffer details?
...Tot
------------     -----------  -------  ----
Kernel                 28859      112   13%
Anon                   34230      133   15%
Exec and libs          10305       40    5%
Page cache             16876       65    8%
Free (cachelist)       26145      102   12%
Free (freelist)       105176      410   47%
Balloon                    0        0    0%
Total                 221591      865

Which just (as far as I can tell) includes the zfs buffers in Kernel memory. And what...
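On builds whose ::memstat has no separate "ZFS File Data" line, one common way to see how much of the "Kernel" bucket is ARC is the arcstats kstat; a minimal sketch (field names can differ slightly between releases):

# bytes currently held by the ARC, counted under "Kernel" in older ::memstat output
kstat -p zfs:0:arcstats:size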
2007 Jan 23
0
Understanding ::memstat in terms of the ARC
...ory shows 2GB, though the two major hogs of that memory (two MySQL instances) claim to be consuming about 6.2GB (checked via pmap). Also, it seems like the ARC keeps pushing kernel memory over the 4GB limit I set for the ARC (zfs_arc_max). What I was also curious about is whether ZFS affects the cachelist line, or if that is just for UFS. Thank you in advance! Best Regards, Jason

01/17/2007 02:28:50 GMT 2007
Page Summary        Pages       MB  %Tot
------------  -----------  -------  ----
Kernel            1485925     5804   36%
Anon...
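For reference, a sketch of the usual way to cap the ARC and then check the cap against reality; the 4GB value here just mirrors the post, and the /etc/system tunable requires a reboot:

# cap the ARC at 4GB by adding this line to /etc/system (reboot required):
#     set zfs:zfs_arc_max = 0x100000000
# afterwards, compare the configured cap with the live ARC size
kstat -p zfs:0:arcstats:c_max
kstat -p zfs:0:arcstats:size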
2008 Mar 27
3
kernel memory and zfs
...ed at this point.

root@servername:~/zonecfg # mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm nfs ]
::memstat
Page Summary        Pages       MB  %Tot
------------  -----------  -------  ----
Kernel            4108442    16048   49%
Anon              3769634    14725   45%
Exec and libs        9098       35    0%
Page cache          29612      115    0%
Free (cachelist)    99437      388    1%
Free (freelist)    369040     1441    4%
Total             8385263    32754
Physical          8176401    31939

Out of 32GB of RAM, 16GB is being used by the kernel. Is there a way to find out how much of that kernel memory is due to ZFS? It just seems that an excessively high amount of our memory is going to the kernel, ev...
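One rough way to attribute kernel memory to ZFS is to look at the kmem caches it allocates from; a sketch, assuming the usual cache names (zio_buf_*, zio_data_buf_*, arc_buf_*, dnode_t, dmu_buf_impl_t, zfs_znode_cache):

# per-cache kernel memory usage for the ZFS-related caches
echo "::kmastat" | mdb -k | egrep 'zio|arc|dnode|dmu|znode'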
2009 Oct 15
8
sub-optimal ZFS performance
...un:
Kernel              162685     635   16%
ZFS File Data        81284     317    8%
Anon                 57323     223    6%
Exec and libs         3248      12    0%
Page cache           14924      58    1%
Free (cachelist)      7881      30    1%
Free (freelist)     700315    2735   68%
Total              1027660    4014
Physical           1027659    4014

memstat post first run:
Page Summary        Pages       MB  %Tot
-----------...
2009 Jul 09
3
performance troubleshooting
...Tot
------------  -----------  -------  ----
Kernel            1133252     4426   31%
Anon              1956988     7644   53%
Exec and libs       31104      121    1%
Page cache         332818     1300    9%
Free (cachelist)    77813      303    2%
Free (freelist)    135815      530    4%
Total             3667790    14327
Physical          3593201    14035

sar -u 5 10:
18:06:58    %usr    %sys    %wio   %idle
18:07:03...
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello. We're currently using a Sun Blade 1000 (2x750MHz, 1G RAM, 2x160MB/s mpt SCSI buses, skge GigE network) as an NFS backend with ZFS for distribution of free software like Debian (cdimage.debian.org, ftp.se.debian.org) and have run into some performance issues. We are running SX snv_48 and have run with a raidz2 of 7x300G for a while now, and just added another 7x300G raidz2 today but
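Since DNLC behaviour is part of the question, a quick sketch of the standard counters to look at first; dnlcstats and ncsize are generic Solaris facilities, not ZFS-specific:

# overall name-lookup hit rate
vmstat -s | grep 'name lookups'
# detailed DNLC counters and the configured cache size
kstat -p unix:0:dnlcstats
echo "ncsize/D" | mdb -k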
2010 Apr 05
0
Why does ARC grow above hard limit?
...---
Kernel              800895    3128   25%
ZFS File Data       394450    1540   13%
Anon                106813     417    3%
Exec and libs         4178      16    0%
Page cache           14333      55    0%
Free (cachelist)     22996      89    1%
Free (freelist)    1797511    7021   57%
Total              3141176   12270
Physical           3141175   12270

--- DURING THE TEST
# ~/bin/arc_summary.pl
System Memory:
Physical RAM: 12270 MB...
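A sketch of how to watch the cap being exceeded while the test runs, using only the arcstats kstat (size is what is actually allocated, c is the current target, c_max is the configured ceiling):

# sample actual ARC size against its target and ceiling every 5 seconds
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max 5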
2010 Apr 02
0
ZFS behavior under limited resources
...---
Kernel              800933    3128   25%
ZFS File Data       394450    1540   13%
Anon                128909     503    4%
Exec and libs         4172      16    0%
Page cache           14749      57    0%
Free (cachelist)     21884      85    1%
Free (freelist)    1776079    6937   57%
Total              3141176   12270
Physical           3141175   12270
----------
System Memory:
Physical RAM: 12270 MB
Free Memory : 6966 MB...
2007 Apr 19
5
Available free memory.
Hi, Can I use DTrace to determine memory status?
1. Total Physical Memory and Used Memory.
2. Total Swap Space and Used Swap Space.
I did find a few DTrace scripts, but they had too much in them and I am unable to chop off the unwanted lines of code due to lack of knowledge. It would be very helpful if someone could share a piece of code that serves my purpose as mentioned above. Regards, Ramesh
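The totals asked for here come more naturally from kstat/swap than from DTrace, but a one-liner that reads the kernel's own page counters is possible; a sketch, assuming the physmem and freemem kernel globals (page counts, multiply by pagesize(1) for bytes):

# physical and free memory, in pages
dtrace -qn 'BEGIN { printf("physmem %d pages, freemem %d pages\n", `physmem, `freemem); exit(0); }'
pagesize
# swap: reserved/allocated/available summary, and per-device detail
swap -s
swap -l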
2006 Oct 05
0
Crash when doing rm -rf
...Tot
------------  -----------  -------  ----
Kernel              41647      162   32%
Anon                56673      221   44%
Exec and libs       11331       44    9%
Page cache           2963       11    2%
Free (cachelist)    11554       45    9%
Free (freelist)      4742       18    4%
Total              128910      503
Physical           128909      503

> $C
db95bae4 vpanic(f9f95858, d7a91388, d7a913b8, f9f94660, 0, 0)
db95bc70 zio_done+0x122(...
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06 (update 3). All file IO is mmap(file), read memory segment, unmap, close. Tweaked the arc size down via mdb to 1GB. I used that value because c_min was also 1GB, and I was not sure if c_max could be larger than c_min... Anyway, I set c_max to 1GB. After a workload run...:

> arc::print -tad
{
. . .
ffffffffc02e29e8
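For anyone repeating this, a sketch of the mdb route used in the post, assuming a build where the ARC limits live in the global arc struct; ADDR is a placeholder for whatever address ::print reports, and c_min may need the same treatment if you want c_max below it:

# find the address of c_max (and note its current value)
echo "arc::print -a c_max" | mdb -k
# write a new 1GB value at that address (ADDR = address from the step above)
echo "ADDR/Z 0x40000000" | mdb -kw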
2006 Feb 24
17
Re: [nfs-discuss] bug 6344186
Joseph Little wrote:
> I'd love to "vote" to have this addressed, but apparently votes for
> bugs are not available to outsiders.
>
> What's limiting Stanford EE's move to using ZFS entirely for our
> snapshotting filesystems and multi-tier storage is the inability to
> access .zfs directories and snapshots in particular on NFSv3 clients.
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single zfs drive, a mirror, or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
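To make the comparison reproducible it helps to pin down the block size and watch the disks at the same time; a sketch with placeholder paths (the raw device name and /tank/testfile are just examples):

# raw device read, 1MB records
dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=1024k count=1000
# the same amount of data through ZFS
dd if=/tank/testfile of=/dev/null bs=1024k count=1000
# per-disk throughput while either dd runs
iostat -xnz 5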