search for: mfu

Displaying 20 results from an estimated 20 matches for "mfu".

2006 Feb 24
17
Re: [nfs-discuss] bug 6344186
Joseph Little wrote: > I'd love to "vote" to have this addressed, but apparently votes for > bugs are not available to outsiders. > > What's limiting Stanford EE's move to using ZFS entirely for our > snapshotting filesystems and multi-tier storage is the inability to > access .zfs directories and snapshots in particular on NFSv3 clients.
2008 Aug 20
9
ARCSTAT Kstat Definitions
Would someone "in the know" be willing to write up (preferably blog) definitive definitions/explanations of all the arcstats provided via kstat? I'm struggling with proper interpretation of certain values, namely "p", "memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit counters. I think I've got it figured out, but I'd really like expert clarification before I start tweaking. Thanks. benr. This message posted from opensolaris.org
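For context, the raw counters being asked about can be dumped directly with the kstat utility; a minimal sketch, assuming the standard Solaris zfs:0:arcstats kstat:

    # print every ARC statistic, including p, memory_throttle_count and the mru/mfu (+ghost) hit counters
    kstat -p zfs:0:arcstats
    # watch a single value, e.g. the adaptive target p, every 5 seconds
    kstat -p zfs:0:arcstats:p 5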
2010 Mar 21
1
arc_summary.pl results
...reakdown:
  Most Recently Used Cache Size:   49%  1034 MB (p)
  Most Frequently Used Cache Size: 50%  1072 MB (c-p)
ARC Efficency:
  Cache Access Total:       59966414
  Cache Hit Ratio:    99%   59929362  [Defined State for buffer]
  Cache Miss Ratio:    0%      37052  [Undefined State for Buffer]
  REAL Hit Ratio:     97%   58728625  [MRU/MFU Hits Only]
  Data Demand Efficiency:   99%
  Data Prefetch Efficiency: 40%
  CACHE HITS BY CACHE LIST:
    Anon:                      2%   1198857        [ New Customer, First Cache Hit ]
    Most Recently Used:        2%   1435141 (mru)  [ Return Customer ]
    Most Frequently Used:     95%  57293484 (mfu)  [ Frequent Customer ]
    Most Recently Used Ghost:  0%  20...
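As a quick sanity check on how these figures relate, the REAL hit ratio appears to be simply the MRU and MFU hits over total cache accesses: 1435141 + 57293484 = 58728625, and 58728625 / 59966414 ≈ 97%, matching the reported value.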
2010 Apr 30
0
ARC Summary
...e Breakdown:
  Most Recently Used Cache Size:   49%  5008 MB (p)
  Most Frequently Used Cache Size: 50%  5009 MB (c-p)
ARC Efficency:
  Cache Access Total:        5630506
  Cache Hit Ratio:    98%    5564770  [Defined State for buffer]
  Cache Miss Ratio:    1%      65736  [Undefined State for Buffer]
  REAL Hit Ratio:     74%    4222245  [MRU/MFU Hits Only]
  Data Demand Efficiency:   98%
  Data Prefetch Efficiency: 23%
  CACHE HITS BY CACHE LIST:
    Anon:                     24%   1342485        [ New Customer, First Cache Hit ]
    Most Recently Used:        7%    396106 (mru)  [ Return Customer ]
    Most Frequently Used:     68%   3826139 (mfu)  [ Frequent Customer ]
    Most Recently Used Ghost:  0%  16...
2010 Apr 05
0
Why does ARC grow above hard limit?
...c-p)
ARC Efficency:
  Cache Access Total:       51681761
  Cache Hit Ratio:    52%   27056475  [Defined State for buffer]
  Cache Miss Ratio:   47%   24625286  [Undefined State for Buffer]
  REAL Hit Ratio:     52%   27056475  [MRU/MFU Hits Only]
  Data Demand Efficiency:   35%
  Data Prefetch Efficiency: DISABLED (zfs_prefetch_disable)
  CACHE HITS BY CACHE LIST:
    Anon:               --%  Counter Rolled.
    Most Recently Used: 13%  3627289 (mru)  [ R...
2010 Apr 02
0
ZFS behavior under limited resources
...c-p)
ARC Efficency:
  Cache Access Total:       47002757
  Cache Hit Ratio:    52%   24657634  [Defined State for buffer]
  Cache Miss Ratio:   47%   22345123  [Undefined State for Buffer]
  REAL Hit Ratio:     52%   24657634  [MRU/MFU Hits Only]
  Data Demand Efficiency:   36%
  Data Prefetch Efficiency: DISABLED (zfs_prefetch_disable)
  CACHE HITS BY CACHE LIST:
    Anon:               --%  Counter Rolled.
    Most Recently Used: 13%  3420349 (mru)  [ R...
2011 Apr 25
3
arcstat updates
...ug : MRU ghost list hits per second
l2hit% : L2ARC access hit percentage
mh% : Metadata hit percentage
l2miss% : L2ARC access miss percentage
read : Total ARC accesses per second
l2hsz : L2ARC header size
c : ARC target size
mfug : MFU ghost list hits per second
miss : ARC misses per second
dm% : Demand data miss percentage
hsz : ARC header size
dhit : Demand data hits per second
pread : Prefetch accesses per second
dread : Demand data accesses per second...
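A hypothetical invocation using the fields listed above (assuming the updated arcstat.pl from this thread, which accepts -f with a comma-separated field list and an interval in seconds):

    arcstat.pl -f time,read,miss,dm%,mfug,mrug,c 1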
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
... > 0.2 0.0 2.7 0.0 12.9 0 88 c1t12d0
> 90.4 41.3 1.0 4.0 0.0 0.2 0.0 1.2 0 6 c1t13d0
> 0.0 24.3 0.0 1.2 0.0 0.0 0.0 0.2 0 0 c1t14d0
>
> Is it true if your MFU stats start to go over 50% then more memory is needed?
>
> CACHE HITS BY CACHE LIST:
>   Anon:               10%  74845266  [ New Customer, First Cache Hit ]
>   Most Recently Used: ...
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06 (update 3). All file IO is mmap(file), read memory segment, unmap, close. Tweaked the arc size down via mdb to 1GB. I used that value because c_min was also 1GB, and I was not sure if c_max could be larger than c_min. ... Anyway, I set c_max to 1GB. After a workload run...: > arc::print -tad { . . . ffffffffc02e29e8
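For anyone who wants the cap to survive a reboot instead of patching it live with mdb, the usual approach is the tunable in /etc/system; a minimal sketch, assuming a Solaris/OpenSolaris system (value in bytes):

    * cap the ARC at 1 GB
    set zfs:zfs_arc_max = 0x40000000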
2009 Dec 24
1
high read iops - more memory for arc?
...d0
126.2  72.6  1.3  0.2  0.0  2.8  0.0  14.2   0  89  c1t11d0
129.7  81.0  1.4  0.2  0.0  2.7  0.0  12.9   0  88  c1t12d0
 90.4  41.3  1.0  4.0  0.0  0.2  0.0   1.2   0   6  c1t13d0
  0.0  24.3  0.0  1.2  0.0  0.0  0.0   0.2   0   0  c1t14d0

Is it true if your MFU stats start to go over 50% then more memory is needed?

CACHE HITS BY CACHE LIST:
  Anon:               10%   74845266        [ New Customer, First Cache Hit ]
  Most Recently Used: 19%  140478087 (mru)  [ Return Customer ]
  Mo...
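The MRU/MFU hit counters behind percentages like these can also be read directly from the ARC kstat rather than inferred from arc_summary output; a minimal sketch, assuming Solaris kstat naming:

    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:mru_hits zfs:0:arcstats:mfu_hits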
2007 May 29
6
Deterioration with zfs performace and recent zfs bits?
Has anyone else noticed a significant zfs performance deterioration when running recent opensolaris bits? My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow compilation disabled; using an lzjb compressed zpool / zfs on a single notebook hdd p-ata drive). After upgrading to 2007-05-25 opensolaris release bits
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello. We're currently using a Sun Blade1000 (2x750MHz, 1G ram, 2x160MB/s mpt scsi buses, skge GigE network) as an NFS backend with ZFS for distribution of free software like Debian (cdimage.debian.org, ftp.se.debian.org) and have run into some performance issues. We are running SX snv_48 and have run with a raidz2 with 7x300G for a while now, just added another 7x300G raidz2 today but
2015 Sep 24
0
FreeBSD 10 & default_vsz_limit causing reboots?
...9, 0.32                                    up 7+03:02:34  14:27:59
1265 processes: 1 running, 1264 sleeping
CPU:  2.6% user,  0.0% nice,  1.4% system,  0.2% interrupt, 95.9% idle
Mem: 3326M Active, 2210M Inact, 25G Wired, 8828K Cache, 1655M Buf, 1000M Free
ARC: 20G Total, 14G MFU, 4646M MRU, 3845K Anon, 621M Header, 1216M Other
Swap: 4096M Total, 4096M Free

Now, it's entirely possible that the user(s) who were eating all my server resources stopped using the system at the same time I increased the vsz limit, but that seems unlikely. I'm leaning towards a FreeBSD i...
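On FreeBSD the ARC breakdown shown on that top(1) line can also be queried directly via sysctl; a minimal sketch, assuming the arcstats sysctl tree present in FreeBSD 10:

    sysctl kstat.zfs.misc.arcstats.size \
           kstat.zfs.misc.arcstats.mfu_size \
           kstat.zfs.misc.arcstats.mru_size \
           vfs.zfs.arc_max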
2011 Jan 12
6
ZFS slows down over a couple of days
Hi all, I have exchanged my Dell R610 in favor of a Sun Fire 4170 M2 which has 32 GB RAM installed. I am running Sol11Expr on this host and I use it primarily to serve Netatalk AFP shares. From day one, I have noticed that the amount of free RAM decreased and along with that decrease the overall performance of ZFS decreased as well. Now, since I am still quite a Solaris newbie, I seem to
2015 Sep 24
2
FreeBSD 10 & default_vsz_limit causing reboots?
On 24 Sep 2015, at 16:26, Rick Romero <rick at havokmon.com> wrote: > > Update. Only a single reboot has occurred since changing > default_vsz_limit from 384M to 512M. It would seem that something the > users are doing is causing that virtual memory size to be exceeded > (possibly a mailbox search?), and when that occurs Dovecot/FreeBSD is not > handling the event as
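For reference, the change being described is a single setting in dovecot.conf; a sketch, assuming Dovecot 2.x syntax:

    # per-process virtual memory size limit; the thread above raises it from 384M to 512M
    default_vsz_limit = 512M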
2010 Mar 05
17
why L2ARC device is used to store files ?
Greetings all. I have created a pool that consists of a hard disk and an SSD as a cache device:
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0   <- cache device
I ran an OLTP benchmark to emulate a DBMS. Once I ran the benchmark, the pool started creating the database files on the SSD cache device. Can anyone explain why this is happening? Isn't the L2ARC used to absorb the evicted data
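One way to see how the cache device is actually being used is to watch it separately with the standard zpool tooling; a sketch using the pool name from the commands above:

    zpool status hdd          # the SSD should appear under its own 'cache' heading, not as a data vdev
    zpool iostat -v hdd 5     # per-vdev I/O, with the cache device broken out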
2012 Nov 13
1
thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC
..., 57.1% system, 0.0% interrupt, 32.3% idle
CPU 22:  5.9% user, 0.0% nice, 58.8% system, 0.0% interrupt, 35.3% idle
CPU 23:  6.3% user, 0.0% nice, 59.6% system, 0.0% interrupt, 34.1% idle
Mem: 3551M Active, 1351M Inact, 2905M Wired, 8K Cache, 7488K Buf, 85G Free
ARC: 1073M Total, 107M MRU, 828M MFU, 784K Anon, 7647K Header, 130M Other
Swap: 8192M Total, 8192M Free

  PID USERNAME  THR PRI NICE  SIZE    RES STATE  C   TIME    WCPU COMMAND
   11 root       24 155 ki31    0K   384K CPU23 23 431.4H 847.95% idle
    0 root      248  -8    0    0K  3968K -      1  10:24  89.45% k...
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
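When running a comparison like this, one variable worth controlling explicitly is dd's block size (the default is 512 bytes); a hypothetical example with a 1 MB block size, where the file path is illustrative:

    dd if=/tank/testfile of=/dev/null bs=1024k count=1000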
2009 Jul 23
1
[PATCH server] changes required for fedora rawhide inclusion.
2008 Jun 30
4
Rebuild of kernel 2.6.9-67.0.20.EL failure
Hello list. I'm trying to rebuild the 2.6.9-67.0.20.EL kernel, but it fails even without modifications. How did I try it? Created a (non-root) build environment (not a mock). Installed the kernel src.rpm and did a rpmbuild -ba --target=`uname -m` kernel-2.6.spec 2> prep-err.log | tee prep-out.log The build failed at the end: Processing files: kernel-xenU-devel-2.6.9-67.0.20.EL Checking