similar to: ZFS behavior under limited resources

Displaying 20 results from an estimated 200 matches similar to: "ZFS behavior under limited resources"

2010 Apr 05
0
Why does ARC grow above hard limit?
I would appreciate it if somebody could clarify a few points. I am doing some random WRITE testing (100% writes, 100% random) and observe that the ARC grows way beyond the "hard" limit during the test. The hard limit is set to 512 MB via /etc/system, yet I see the size going up to 1 GB - how is that happening? mdb's ::memstat reports 1.5 GB used - does this include the ARC as well or is
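For reference, a cap like the one described would typically be set in /etc/system along these lines (the value below is illustrative):

    * Cap the ARC target size at 512 MB (0x20000000 bytes)
    set zfs:zfs_arc_max = 0x20000000

One plausible explanation for the overshoot: zfs_arc_max caps the ARC's adaptive target (c), not a hard ceiling, and under a heavy write load un-evictable in-flight (anonymous) buffers can push the actual size above that target.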
2010 Apr 30
0
ARC Summary
Was wondering if any of you see any issues with the following in Solaris 10 u8 ZFS?

System Memory:
     Physical RAM:  11042 MB
     Free Memory:   5250 MB
     LotsFree:      168 MB

ZFS Tunables (/etc/system):

ARC Size:
     Current Size:            4309 MB (arcsize)
     Target Size (Adaptive):  10018 MB (c)
     Min Size (Hard Limit):   1252 MB (zfs_arc_min)
     Max Size (Hard Limit):   10018 MB (zfs_arc_max)

ARC Size Breakdown:
     Most Recently
2010 Mar 21
1
arc_summary.pl results
Was wondering if anyone can see any issues with the ARC in the following output?

bash-3.00# ./arc_summary.pl

System Memory:
     Physical RAM:  6023 MB
     Free Memory:   784 MB
     LotsFree:      90 MB

ZFS Tunables (/etc/system):

ARC Size:
     Current Size:            1159 MB (arcsize)
     Target Size (Adaptive):  2106 MB (c)
     Min Size (Hard Limit):   624 MB (zfs_arc_min)
     Max Size (Hard Limit):   4999 MB (zfs_arc_max)

ARC Size
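The figures arc_summary.pl reports can also be sampled directly from the arcstats kstat; a minimal check of the current size against the adaptive target might look like this:

    # current ARC size and adaptive target, in bytes
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:c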
2007 Jan 23
0
Understanding ::memstat in terms of the ARC
Hello all, I have a question. Below are two ::memstat outputs taken about 5 days apart. The interesting thing is that "anonymous" memory shows 2 GB, though the two major consumers of that memory (two MySQL instances) claim to be using about 6.2 GB (checked via pmap). Also, it seems like the ARC keeps creeping kernel memory over the 4 GB limit I set for the ARC (zfs_arc_max). What I was also,
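For anyone reproducing this comparison, a sketch of the two views being contrasted (the pid is a placeholder):

    # kernel-side page accounting
    echo "::memstat" | mdb -k
    # per-process memory breakdown for one MySQL instance
    pmap -x <mysql-pid>

A likely source of the mismatch is that pmap totals include virtual reservations, while ::memstat counts resident physical pages only.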
2009 Nov 18
0
open(2), but no I/O to large files creates performance hit
I'm seeing a performance anomaly where opening a large file (but doing *no* I/O to it) seems to cause (or correlates with) a significant performance hit on a mirrored ZFS filesystem. Unintuitively, if I disable zfs_prefetch_disable, I don't see the performance degradation. It doesn't make sense that this would help unless there is some cache/VM pollution resulting
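For context, the tunable mentioned is normally flipped in /etc/system; a sketch (1 disables file-level prefetch):

    * Disable ZFS file-level prefetch
    set zfs:zfs_prefetch_disable = 1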
2008 Aug 20
9
ARCSTAT Kstat Definitions
Would someone "in the know" be willing to write up (preferably blog) definitive definitions/explanations of all the arcstats provided via kstat? I''m struggling with proper interpretation of certain values, namely "p", "memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit counters. I think I''ve got it figured out, but
2007 Nov 08
5
mdb ::memstat including zfs buffer details?
Hey all - Just a quick one... Is there any plan to update the mdb ::memstat dcmd to present ZFS buffers as part of the summary? At present, we get something like:

> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                      28859               112   13%
Anon                        34230
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello. We're currently using a Sun Blade 1000 (2x750 MHz, 1 GB RAM, 2x160 MB/s mpt SCSI buses, skge GigE network) as an NFS backend with ZFS for distribution of free software like Debian (cdimage.debian.org, ftp.se.debian.org) and have run into some performance issues. We are running SX snv_48 and have run with a 7x300 GB raidz2 for a while now; we just added another 7x300 GB raidz2 today, but
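For reference, growing a pool by another top-level raidz2 vdev, as described here, is a single command; a sketch with placeholder pool and device names:

    # add a second 7-disk raidz2 vdev to an existing pool
    zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0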
2008 Oct 02
1
Terrible performance when setting zfs_arc_max snv_98
Hi there. I just got a new Adaptec RAID 51645 controller in because the old one (a different model) was malfunctioning. It is paired with 16 Seagate 15k5 disks, of which two are used in a hardware RAID 1 for OpenSolaris snv_98, and the rest are configured as striped mirrors in a zpool. I created a zfs filesystem on this pool with a blocksize of 8K. This server has 64 GB of memory and will be running
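A filesystem with an 8K blocksize, as described, would be created roughly like this (pool and dataset names are placeholders):

    # match the recordsize to an 8K application block size
    zfs create -o recordsize=8k tank/data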
2007 May 29
6
Deterioration with zfs performance and recent zfs bits?
Has anyone else noticed a significant zfs performance deterioration when running recent opensolaris bits? My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a full opensolaris release build in ~4 hours 45 minutes (gcc shadow compilation disabled; using an lzjb-compressed zpool / zfs on a single notebook P-ATA hard drive). After upgrading to the 2007-05-25 opensolaris release bits
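For reference, lzjb compression like that used here is enabled per dataset; a sketch with a placeholder dataset name:

    # enable lzjb compression on the build filesystem
    zfs set compression=lzjb rpool/build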
2009 Oct 15
8
sub-optimal ZFS performance
Hello, ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome. I am running OSOL on my laptop, currently b124, and I found that the performance of ZFS is not optimal in all situations. If I check how much space the package cache for pkg(1) uses, it takes a bit longer on this host than on a comparable machine to which I transferred all the data. user@host:/var/pkg$ time
2008 Mar 27
3
kernel memory and zfs
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and ftp servers running in the various zones. I understand that using ZFS will increase kernel memory usage, but I am a bit concerned at this point.

root@servername:~/zonecfg# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings, gentlemen. I'm currently testing a new setup for a ZFS-based storage system with dedup enabled. The system is set up on OI 148, which seems quite stable with dedup enabled (compared to the OpenSolaris snv_136 build I used before). One issue I ran into, however, is quite baffling: with iozone set to 32 threads, ZFS's ARC seems to consume all available memory, making
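For anyone recreating the setup, dedup is a per-dataset property, and the dedup table (DDT) can be inspected with zdb; a sketch with a placeholder pool name:

    # enable dedup, then examine dedup-table statistics
    zfs set dedup=on tank
    zdb -DD tank

The DDT is held in the ARC as metadata, which is one plausible reason dedup-heavy workloads drive ARC consumption so hard.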
2009 Dec 27
3
windows xp domU consuming 200% cpu (dom0 is quad-core) at idle
What have I done wrong? I installed an XP (Service Pack 2) domU using the virt-install example from the man page, on a zfs volume. I noticed this behavior on build 129, and I replicated it on build 130:

root@opensolaris:/tmp# virsh version
Compiled against library: libvir 0.7.0
Using library: libvir 0.7.0
Using API: Xen 3.0.1
Running hypervisor: Xen 3.4
root@opensolaris:/tmp#
2011 Jan 12
6
ZFS slows down over a couple of days
Hi all, I have exchanged my Dell R610 for a Sun Fire 4170 M2, which has 32 GB RAM installed. I am running Sol11Expr on this host and use it primarily to serve Netatalk AFP shares. From day one, I have noticed that the amount of free RAM decreased, and along with that decrease the overall performance of ZFS decreased as well. Now, since I am still quite a Solaris newbie, I seem to
2012 Jan 03
10
arc_no_grow is set to 1 and never set back to 0
Hello. I have a Solaris 11/11 x86 box (which I migrated from SolEx 11/10 a couple of weeks ago). For no obvious reason (at least to me), after an uptime of 1 to 2 days (observed 3 times now), Solaris sets arc_no_grow to 1 and then never sets it back to 0. The ARC is shrunk to less than 1 GB -- needless to say, performance is terrible. There is not much load on this system. Memory
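One way to watch the flag described here, assuming the kernel variable is visible by that name as in this thread:

    # print arc_no_grow as a decimal integer on a live kernel
    echo "arc_no_grow/D" | mdb -k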
2001 Feb 01
0
browsing subnets over vpnd
Hey :) I have recently set up VPN links between 3 subnets. I can ping back and forth between all of the computers just fine, and Samba works on all but one subnet. The problem is that I am unable to use the VPN to browse computers on the office network. I can browse them locally from the office, and I can browse other subnets from them (i.e. at home, etc.), but when I get on another subnet and try
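Cross-subnet browsing usually needs explicit help in smb.conf, since browse elections don't cross routed links; a sketch with placeholder broadcast addresses and workgroup:

    # propagate browse lists between subnets
    remote browse sync = 192.168.1.255 192.168.2.255
    remote announce = 192.168.1.255/WORKGROUP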
2006 Oct 05
0
Crash when doing rm -rf
Not a really good subject, I know, but that's kind of what happened. I'm trying to build a backup server: Windows users use OSCAR (which uses rsync) to sync their files to a folder, and when that completes, it takes a snapshot. It had worked before, but then I turned on the -R switch to rsync, and when I then removed the folder with rm -rf, it crashed. I didn't save what
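For context, the snapshot step after a completed sync would be a single command; a sketch with placeholder dataset and snapshot names:

    # snapshot the user's backup folder after rsync finishes
    zfs snapshot tank/backup/user@2006-10-05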
2008 Mar 14
8
xcalls - mpstat vs dtrace
Hi, T5220, S10U4 + patches.

mdb -k
> ::memstat

While the above is working (it takes some time; ideally a "::memstat -n 4" to use 4 threads could be useful), mpstat 1 shows:

CPU minf mjf xcal    intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
 48    0   0 1922112    9    0    0    8    0    0    0 15254    6  94   0   0

So about 2 million xcalls per second. Let's check with dtrace:
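A typical way to attribute those cross-calls, along the lines this thread pursues, is the sysinfo provider; a minimal one-liner (interrupt with ^C after a few seconds):

    # count cross-calls by kernel stack
    dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'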
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06 (Update 3). All file I/O is mmap(file), read memory segment, unmap, close. Tweaked the ARC size down via mdb to 1 GB. I used that value because c_min was also 1 GB, and I was not sure if c_max could be smaller than c_min... Anyway, I set c_max to 1 GB. After a workload run...:

> arc::print -tad
{
. . .
ffffffffc02e29e8
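For reference, tweaking a live ARC field this way requires a writable mdb session; a sketch of the sequence (the address comes from the ::print output; 0x40000000 is 1 GB):

    mdb -kw
    > arc::print -a c_max
    > <address-from-above>/Z 0x40000000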