Displaying 20 results from an estimated 100 matches similar to: "Why does ARC grow above hard limit?"
2010 Apr 02
0
ZFS behavior under limited resources
I am trying to see how ZFS behaves under resource starvation - corner cases in embedded environments. I see some very strange behavior. Any help/explanation would really be appreciated.
My current setup is :
OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple connections/multipathing)
iSCSI Storage Array that is capable of
20 MB/s random writes @ 4k and 70 MB/s random reads
2010 Mar 21
1
arc_summary.pl results
Was wondering if anyone can see any issues with the ARC in the following
output?
bash-3.00# ./arc_summary.pl
System Memory:
Physical RAM: 6023 MB
Free Memory : 784 MB
LotsFree: 90 MB
ZFS Tunables (/etc/system):
ARC Size:
Current Size: 1159 MB (arcsize)
Target Size (Adaptive): 2106 MB (c)
Min Size (Hard Limit): 624 MB (zfs_arc_min)
Max Size (Hard Limit): 4999 MB (zfs_arc_max)
ARC Size
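For reference, the limits reported above (zfs_arc_min / zfs_arc_max) are normally capped in /etc/system along these lines; the byte values below are only illustrative, not taken from this post, and a reboot is needed for them to take effect:
* /etc/system -- sizes are in bytes; the values here are examples only
* 1 GB floor for the ARC
set zfs:zfs_arc_min = 0x40000000
* 4 GB ceiling for the ARC
set zfs:zfs_arc_max = 0x100000000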
2010 Apr 30
0
ARC Summary
Was wondering if any of you see any issues with the following in Solaris
10 u8 ZFS?
System Memory:
Physical RAM: 11042 MB
Free Memory : 5250 MB
LotsFree: 168 MB
ZFS Tunables (/etc/system):
ARC Size:
Current Size: 4309 MB (arcsize)
Target Size (Adaptive): 10018 MB (c)
Min Size (Hard Limit): 1252 MB (zfs_arc_min)
Max Size (Hard Limit): 10018 MB (zfs_arc_max)
ARC Size Breakdown:
Most Recently
2009 Nov 18
0
open(2), but no I/O to large files creates performance hit
I'm seeing a performance anomaly where opening a large file (but doing
*no* I/O to it) seems to cause (or correlates to) a significant
performance hit on a mirrored ZFS filesystem. Unintuitively, if I
disable zfs_prefetch_disable, I don't see the performance degradation.
It doesn't make sense that this would help unless there is some cache/VM
pollution resulting
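For testing, zfs_prefetch_disable is usually flipped on a live system with mdb (the /etc/system line is the persistent equivalent); a hedged sketch, and whether prefetch is actually the culprit here is exactly what the poster is trying to pin down:
# disable file-level prefetch immediately (0t1 = decimal 1)
echo "zfs_prefetch_disable/W0t1" | mdb -kw
# re-enable it
echo "zfs_prefetch_disable/W0t0" | mdb -kw
# persistent form, in /etc/system (reboot required)
set zfs:zfs_prefetch_disable = 1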
2008 Oct 02
1
Terrible performance when setting zfs_arc_max snv_98
Hi there.
I just got a new Adaptec RAID 51645 controller in because the old (other type) was malfunctioning. It is paired with 16 Seagate 15k5 disks, of which two are used with hardware RAID 1 for OpenSolaris snv_98, and the rest are configured as striped mirrors in a zpool. I created a zfs filesystem on this pool with a blocksize of 8K.
This server has 64GB of memory and will be running
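A rough sketch of the pool/filesystem layout described (striped mirrors plus an 8K "blocksize"); the pool name and device names are made up, since the post does not list them, and the ZFS property involved is recordsize:
# hypothetical pool and disk names
zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0
# the 8K record size is set per filesystem, not per pool
zfs create -o recordsize=8k tank/data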
2008 Aug 20
9
ARCSTAT Kstat Definitions
Would someone "in the know" be willing to write up (preferably blog) definitive definitions/explanations of all the arcstats provided via kstat? I''m struggling with proper interpretation of certain values, namely "p", "memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit counters. I think I''ve got it figured out, but
2007 May 29
6
Deterioration with zfs performance and recent zfs bits?
Has anyone else noticed a significant zfs performance deterioration
when running recent opensolaris bits?
My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a
full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow
compilation disabled; using an lzjb compressed zpool / zfs on a
single notebook hdd p-ata drive).
After upgrading to 2007-05-25 opensolaris release bits
2007 Jan 23
0
Understanding ::memstat in terms of the ARC
Hello all,
I have a question. Below are two ::memstat outputs about 5 days apart.
The interesting thing is the "anonymous" memory shows 2GB, though the
two major hogs of that memory (two MySQL instances) claim to be
consuming about 6.2GB (checked via pmap).
Also, it seems like the ARC keeps pushing kernel memory over the
4GB limit I set for the ARC (zfs_arc_max). What I was also,
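For anyone comparing the same numbers, the two views being contrasted are roughly these (the PID is hypothetical):
# kernel vs. anon vs. free page breakdown
echo "::memstat" | mdb -k
# resident/anon totals for one MySQL instance
pmap -x 1234 | tail -1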
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings Gentlemen,
I'm currently testing a new setup for a ZFS-based storage system with
dedup enabled. The system is set up on OI 148, which seems quite stable
w/ dedup enabled (compared to the OpenSolaris snv_136 build I used
before).
One issue I ran into, however, is quite baffling:
With iozone set to 32 threads, ZFS's ARC seems to consume all available
memory, making
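A sketch of reproducing this kind of load; the file size, record size and path are assumptions, not taken from the post:
cd /tank/test
# 32-thread throughput run: sequential write then read
iozone -t 32 -s 2g -r 128k -i 0 -i 1
# meanwhile, watch the ARC grow from another shell
while true; do kstat -p zfs:0:arcstats:size; sleep 5; done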
2009 Dec 27
3
windows xp domU consuming 200% cpu (dom0 is quad-core) at idle
What have I done wrong? I installed an xp (service pack 2) domU using the virt-install example from the man page, on a zfs volume. I noticed this behavior on build 129, and I replicated the behavior on build 130:
root@opensolaris:/tmp# virsh version
Compiled against library: libvir 0.7.0
Using library: libvir 0.7.0
Using API: Xen 3.0.1
Running hypervisor: Xen 3.4
root@opensolaris:/tmp#
2011 Jan 12
6
ZFS slows down over a couple of days
Hi all,
I have replaced my Dell R610 with a Sun Fire 4170 M2 which has
32 GB RAM installed. I am running Sol11Expr on this host and use it
primarily to serve Netatalk AFP shares. From day one, I have noticed that
the amount of free RAM decreased and, along with that decrease, the
overall performance of ZFS decreased as well.
Now, since I am still quite a Solaris newbie, I seem to
2010 Jul 24
0
ARC/VM question
I have a semi-theoretical question about the following code in arc.c,
arc_reclaim_needed() function:
/*
* take 'desfree' extra pages, so we reclaim sooner, rather than later
*/
extra = desfree;
/*
* check that we're out of range of the pageout scanner. It starts to
* schedule paging if freemem is less than lotsfree and needfree.
* lotsfree is the high-water mark
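The thresholds that comment refers to can be eyeballed from the usual system_pages kstat (values are in pages; the statistic names below are assumed to be the stock ones):
kstat -p unix:0:system_pages:freemem
kstat -p unix:0:system_pages:lotsfree unix:0:system_pages:desfree unix:0:system_pages:minfree
# multiply by the page size to get bytes
pagesize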
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello.
We're currently using a Sun Blade 1000 (2x750MHz, 1 GB RAM, 2x160MB/s mpt
scsi buses, skge GigE network) as a NFS backend with ZFS for
distribution of free software like Debian (cdimage.debian.org,
ftp.se.debian.org) and have run into some performance issues.
We are running SX snv_48 and have run with a raidz2 with 7x300G for a
while now, just added another 7x300G raidz2 today but
2008 Oct 24
3
more smbd CPU mystery
Well, I have determined that every time someone logs in or out
of a Windows box in our lab *ALL* of the files in "My Directory"
are copied between the file server and the local client. Needless to
say this is retarded and needs to stop. The local sysadmin needs
to perform some Windows voodoo to redirect this directory.
Still, this leaves the mystery of why smbd would take up so
much
2008 May 26
2
SNV82: Not enough memory is available, and dom0 cannot be shrunk any further
Hi All,
I am running Nevada 79 BFU'ed to 82. The machine is an Ultra 20 with 4GB of
memory. I have several Windows XP domUs configured and registered.
Whenever I try to start the fourth domain I get an out-of-memory exception:
Not enough memory is available, and dom0 cannot be shrunk any further
Each of my domains only uses 256 MB, so I thought there would be sufficient
memory
2008 Jan 28
5
XEN - ZFS
Hello,
I have read, and know, that ZFS + Xen do not work well together (I have only
512 MB for Dom0).
How can I disable all ZFS stuff to leave more usable memory for Dom0?
Regards
Maciej
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06
(update 3). All file I/O is mmap(file), read memory segment, unmap, close.
Tweaked the ARC size down via mdb to 1GB. I used that value because
c_min was also 1GB, and I was not sure if c_max could be larger than
c_min.... Anyway, I set c_max to 1GB.
After a workload run....:
> arc::print -tad
{
. . .
ffffffffc02e29e8
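The usual idiom from that era, sketched with a made-up address and value: dump the arc struct as the poster did, note the address printed next to c_max, then write a new 64-bit byte count there with /Z (the change takes effect immediately and does not persist across reboot):
# mdb -kw
> arc::print -tad
(locate the line for c_max; the address and old value below are hypothetical)
ffffffffc02e2a00 c_max = 0x100000000
> ffffffffc02e2a00/Z 0x40000000
(writes 1 GB into c_max)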
2009 Apr 09
1
Re: Basic getting-started help point
> http://blogs.sun.com/ptelles/entry/sun_xvm_hypervisor_part_i
Hello, I can't get this to work. My PC just reboots after "Syncing filesystem" when I try to boot into xVM Hypervisor. I have a clean install of OpenSolaris 2008.11.
2007 Nov 19
1
Recommended settings for dom0_mem when using zfs
I have an xVM b75 server and use zfs for storage (zfs root mirror and a
raid-z2 datapool.)
I see everywhere that it is recommended to have a lot of memory on a
zfs file server... but I also need to relinquish a lot of my memory to
be used by the domUs.
What would be a good value for dom0_mem on a box with 4 GB of RAM?
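dom0_mem is a Xen boot option, so it goes on the xen.gz line of the existing xVM entry in /boot/grub/menu.lst; the 1024M below is only an example value, not a recommendation from this thread:
# append dom0_mem to the existing kernel$ line; leave the module$ lines as they are
kernel$ /boot/$ISADIR/xen.gz dom0_mem=1024M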