similar to: ARC Summary

Displaying 20 results from an estimated 100 matches similar to: "ARC Summary"

2010 Apr 05
0
Why does ARC grow above hard limit?
I would appreciate it if somebody could clarify a few points. I am doing some random WRITES (100% writes, 100% random) testing and observe that the ARC grows way beyond the "hard" limit during the test. The hard limit is set to 512 MB via /etc/system and I see the size going up to 1 GB - how is this happening? mdb's ::memstat reports 1.5 GB used - does this include the ARC as well or is
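For reference, the tuning being described is usually a single line in /etc/system; a minimal sketch, assuming the 512 MB figure from the message (the exact line is not shown in the excerpt):

    * Cap the ZFS ARC target maximum (arc_c_max) at 512 MB; the value is in
    * bytes and takes effect at the next reboot.
    set zfs:zfs_arc_max = 536870912

Note that zfs_arc_max caps the ARC's target size rather than every ARC-related allocation, and under a heavy write load the cache can temporarily overshoot the target, which is one commonly cited reason ::memstat reports more memory in use than the limit suggests.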
2010 Apr 02
0
ZFS behavior under limited resources
I am trying to see how ZFS behaves under resource starvation - corner cases in embedded environments. I see some very strange behavior. Any help/explanation would really be appreciated. My current setup is: OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple connections/multipathing) iSCSI Storage Array that is capable of 20 MB/s random writes @ 4k and 70 MB random reads
2010 Mar 21
1
arc_summary.pl results
Was wondering if anyone can see any issues with the ARC in the following output?

bash-3.00# ./arc_summary.pl
System Memory:
        Physical RAM:  6023 MB
        Free Memory:   784 MB
        LotsFree:      90 MB

ZFS Tunables (/etc/system):

ARC Size:
        Current Size:             1159 MB (arcsize)
        Target Size (Adaptive):   2106 MB (c)
        Min Size (Hard Limit):    624 MB (zfs_arc_min)
        Max Size (Hard Limit):    4999 MB (zfs_arc_max)

ARC Size
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Prefetching at the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent I/Os from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms). I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
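The primarycache change mentioned here is normally applied with a one-line zfs command; the dataset name below is only a placeholder:

    # Cache only metadata in the ARC for this dataset; file data is then read
    # from disk (or from the L2ARC, if one is attached).
    zfs set primarycache=metadata tank/oradata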
2009 Dec 24
1
high read iops - more memory for arc?
I'm running into an issue where there seems to be a high number of read IOPS hitting the disks, and physical free memory is fluctuating between 200MB -> 450MB out of 16GB total. We have the L2ARC configured on a 32GB Intel X25-E SSD and the slog on another 32GB X25-E SSD. According to our tester, Oracle writes are extremely slow (high latency). Below is a snippet of iostat: r/s w/s
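An L2ARC plus slog layout like the one described is normally attached with zpool add; the pool and device names here are placeholders, not taken from the message:

    # One SSD as a read cache (L2ARC), one as a separate intent-log device (slog)
    zpool add tank cache c4t0d0
    zpool add tank log c4t1d0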
2011 Apr 25
3
arcstat updates
Hi ZFSers, I've been working on merging the Joyent arcstat enhancements with some of my own and am now at the point where it is time to broaden the requirements gathering. The result is to be merged into the illumos tree. arcstat is a Perl script to show the value of ARC kstats as they change over time. This is similar to the ideas behind mpstat, iostat, vmstat, and friends. The current
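Like the tools it is modeled on, arcstat takes an interval and an optional count; a typical invocation looks like the following (column-selection options vary between versions, so this shows only the basic form):

    # Sample the ARC kstats every 5 seconds, 10 samples, default columns
    ./arcstat.pl 5 10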
2008 Aug 20
9
ARCSTAT Kstat Definitions
Would someone "in the know" be willing to write up (preferably blog) definitive definitions/explanations of all the arcstats provided via kstat? I'm struggling with the proper interpretation of certain values, namely "p", "memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit counters. I think I've got it figured out, but
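The counters in question come straight from the zfs:arcstats kstat and can be inspected without any wrapper script, for example:

    # Dump the whole ARC kstat group, or pick out individual statistics
    kstat -m zfs -n arcstats
    kstat -p zfs:0:arcstats:p zfs:0:arcstats:memory_throttle_count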
2015 Sep 24
0
FreeBSD 10 & default_vsz_limit causing reboots?
Quoting Timo Sirainen <tss at iki.fi>: > On 24 Sep 2015, at 16:26, Rick Romero <rick at havokmon.com> wrote: >> Update. Only a single reboot has occurred since changing >> default_vsz_limit from 384M to 512M. It would seem that something the >> users are doing is causing that virtual memory size to be exceeded >> (possibly a mailbox search?), and when that
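For context, default_vsz_limit is a global Dovecot setting; the change being discussed amounts to something like the following in dovecot.conf (the 512M figure is taken from the message):

    # Per-process virtual memory size limit; allocations beyond it fail, which
    # typically shows up as service processes dying with out-of-memory errors.
    default_vsz_limit = 512M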
2010 Jul 24
0
ARC/VM question
I have a semi-theoretical question about the following code in arc.c, arc_reclaim_needed() function:

    /*
     * take 'desfree' extra pages, so we reclaim sooner, rather than later
     */
    extra = desfree;

    /*
     * check that we're out of range of the pageout scanner. It starts to
     * schedule paging if freemem is less than lotsfree and needfree.
     * lotsfree is the high-water mark
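The thresholds named in that comment are exported as kstats, which makes it easy to watch how close freemem gets to the pageout trigger while the ARC is reclaiming; the statistic names below come from the unix:system_pages kstat:

    # Current free memory and the pageout scanner's thresholds, in pages
    kstat -p unix:0:system_pages:freemem \
          unix:0:system_pages:lotsfree \
          unix:0:system_pages:desfree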
2006 Feb 24
17
Re: [nfs-discuss] bug 6344186
Joseph Little wrote: > I'd love to "vote" to have this addressed, but apparently votes for > bugs are not available to outsiders. > > What's limiting Stanford EE's move to using ZFS entirely for our > snapshotting filesystems and multi-tier storage is the inability to > access .zfs directories and snapshots in particular on NFSv3 clients.
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06 (update 3). All file I/O is mmap(file), read memory segment, unmap, close. Tweaked the ARC size down via mdb to 1GB. I used that value because c_min was also 1GB, and I was not sure if c_max could be larger than c_min.... Anyway, I set c_max to 1GB. After a workload run....: > arc::print -tad { . . . ffffffffc02e29e8
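The mdb recipe being referred to usually looks roughly like this: print the arc structure to find the field addresses, then write a new value in write mode. The address token and the 1 GB value below are placeholders, not values from the message:

    # mdb -kw
    > arc::print -tad
        (note the addresses printed next to c, c_min and c_max)
    > ADDR_OF_c_max/Z 0x40000000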
2012 Nov 13
1
thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC
Hi there, we have a pair of servers running FreeBSD 9.1-RC3 that act as a transparent layer 7 load balancer (relayd) and POP/IMAP proxy (Dovecot). Only one of them is active at a given time; it's a failover setup. From time to time the active one gets into a state in which the 'thread taskq' thread uses up 100% of one CPU on its own, like here: ---- PID USERNAME PRI NICE SIZE
2007 May 29
6
Deterioration with zfs performance and recent zfs bits?
Has anyone else noticed a significant zfs performance deterioration when running recent opensolaris bits? My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow compilation disabled; using an lzjb compressed zpool / zfs on a single notebook hdd p-ata drive). After upgrading to 2007-05-25 opensolaris release bits
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello. We're currently using a Sun Blade 1000 (2x750MHz, 1G RAM, 2x160MB/s mpt SCSI buses, skge GigE network) as an NFS backend with ZFS for distribution of free software like Debian (cdimage.debian.org, ftp.se.debian.org) and have run into some performance issues. We are running SX snv_48 and have run with a raidz2 of 7x300G for a while now, and just added another 7x300G raidz2 today but
2012 Dec 24
3
vif-route issue with HVM domU only
Hi, I seem to have an interesting issue with vif-route. This is after an update to Xen 4.2.1, switching from xm to xl. I have 10 PV domUs on the host and two FreeBSD ones. All the PV domUs are now working nicely. Since FreeBSD has always been just slightly broken as PV, I chose an HVM domU for those, but with PV drivers. Those PV drivers all blew up after the upgrade. I'm now trying
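With xl, routed networking is selected per vif in the domain configuration; a minimal sketch of the relevant lines, with the guest name and IP address as placeholders (an HVM guest with PV drivers may also expose an emulated NIC alongside the PV one):

    # /etc/xen/freebsd1.cfg (hypothetical guest)
    builder = "hvm"
    vif = [ 'script=vif-route, ip=192.0.2.10' ]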
2011 Jan 12
6
ZFS slows down over a couple of days
Hi all, I have exchanged my Dell R610 in favor of a Sun Fire 4170 M2 which has 32 GB RAM installed. I am running Sol11Expr on this host and I use it primarily to serve Netatalk AFP shares. From day one, I have noticed that the amount of free RAM decreased, and along with that decrease the overall performance of ZFS decreased as well. Now, since I am still quite a Solaris newbie, I seem to
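On a Solaris-derived system, a quick way to see where the "missing" RAM has gone is the ::memstat dcmd, which breaks physical memory down by consumer; on kernels of that era, ZFS file data is listed as its own category:

    # echo ::memstat | mdb -k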
2015 Sep 24
2
FreeBSD 10 & default_vsz_limit causing reboots?
On 24 Sep 2015, at 16:26, Rick Romero <rick at havokmon.com> wrote: > > Update. Only a single reboot has occurred since changing > default_vsz_limit from 384M to 512M. It would seem that something the > users are doing is causing that virtual memory size to be exceeded > (possibly a mailbox search?), and when that occurs Dovecot/FreeBSD is not > handling the event as
2010 Mar 05
17
why is the L2ARC device used to store files?
Greetings All, I have created a pool that consists of a hard disk and an SSD as a cache:

    zpool create hdd c11t0d0p3
    zpool add hdd cache c8t0d0p0    - cache device

I ran an OLTP benchmark to emulate a DBMS. Once I ran the benchmark, the pool started creating the database files on the SSD cache device. Can anyone explain why this is happening? Isn't the L2ARC used to absorb the evicted data
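Two standard zpool commands are enough to confirm that the SSD really is attached as a cache vdev and to watch where the writes are going:

    zpool status hdd        # the SSD should appear under a separate "cache" heading
    zpool iostat -v hdd 5   # per-vdev I/O, sampled every 5 seconds

Also worth noting: the L2ARC is populated by a background feed thread, so steady writes to the cache device during a benchmark are expected and do not by themselves mean the database files live on the SSD.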
2003 Dec 18
3
[Bug 776] Add Fedora information to build RPMs
http://bugzilla.mindrot.org/show_bug.cgi?id=776 Summary: Add Fedora information to build RPMs Product: Portable OpenSSH Version: -current Platform: All OS/Version: Linux Status: NEW Severity: normal Priority: P2 Component: Build system AssignedTo: openssh-bugs at mindrot.org ReportedBy:
2008 Apr 08
4
ZFS deadlock
Hello A box of mine running RELENG_7_0 and ZFS over a couple of disks (6 disks, 3 mirrors) seems to have gotten stuck. From Ctrl-T: load: 0.50 cmd: zsh 40188 [zfs:&buf_hash_table.ht_locks[i].ht_lock] 0.02u 0.04s 0% 3404k load: 0.43 cmd: zsh 40188 [zfs:&buf_hash_table.ht_locks[i].ht_lock] 0.02u 0.04s 0% 3404k load: 0.10 cmd: zsh 40188 [zfs:&buf_hash_table.ht_locks[i].ht_lock]