search for: primarycach

Displaying 20 results from an estimated 27 matches for "primarycach".

2008 Jun 24
1
zfs primarycache and secondarycache properties
...ate from the case. Eric kustarz wrote: > > On Jun 23, 2008, at 1:20 PM, Darren Reed wrote: > >> eric kustarz wrote: >>> >>> On Jun 23, 2008, at 1:07 PM, Darren Reed wrote: >>> >>>> Tim Haley wrote: >>>>> .... >>>>> primarycache=all | none | metadata >>>>> >>>>> Controls what is cached in the primary cache (ARC). If set to >>>>> "all", then both user data and metadata is cached. If set to >>>>> "none", then neither user data nor...
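For context, the primarycache property quoted above is set per dataset. A minimal illustration, with a made-up pool/dataset name:

    # cache only metadata in the ARC for this dataset
    zfs set primarycache=metadata tank/db
    # verify the current settings
    zfs get primarycache,secondarycache tank/db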
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris. I had to abandon zfs on local discs in x64 Solaris 10 5/08. The situation was that:
* DB2 buffer pools occupied up to 90% of 32GB RAM on each host
* DB2 cached the entire database in its buffer pools
  o having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
...atably) during one of the READ tests, usually in iozone's random read test. The WRITE tests work perfectly fine (and blazingly fast). The system has 12G RAM. It does not matter whether a reasonably fast L2ARC is added to the pool. The only thing that seems to help is setting 'primarycache=metadata' for the ZFS in question, which does not seem like a desirable configuration. I have also tried setting the zfs_arc_max property to 0x80000000 (2G) in /etc/system. This helps a little (it won't lock up quite as quickly), but at some point, it will still eat up all memo...
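The ARC cap mentioned in this post is an /etc/system tunable on Solaris-derived systems; roughly, the entry would look like the following (it only takes effect after a reboot):

    * cap the ARC at 2 GB (0x80000000 bytes)
    set zfs:zfs_arc_max = 0x80000000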
2012 Dec 01
3
6Tb Database with ZFS
Hello, I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I want to set the arc_max parameter so ZFS can't use all of my system's memory, but I don't know how much to set it to. Do you think 24GB will be enough for a 6TB database? Obviously the more the better, but I can't set aside too much memory. Has anyone successfully implemented something similar? We ran some tests and the
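If 24GB turns out to be a reasonable cap, the /etc/system entry would carry the byte value 24 * 1024^3 = 25769803776; a sketch (reboot required):

    * cap the ARC at 24 GB (24 * 1024^3 bytes)
    set zfs:zfs_arc_max = 25769803776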
2013 May 09
4
recommended memory for zfs
Hello, a zfs question about memory. I have heard that zfs is very RAM hungry. Services I am looking to run:
- nginx
- postgres
- php-fpm
- python
I have a machine with two quad-core CPUs but only 4 GB of memory, and I'm looking to buy more RAM now. What would be the recommended amount of memory for zfs across 6 drives on this setup? Also, can 9.1 now boot to zfs from the installer? (no tricks for post install) Thanks
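Assuming this is FreeBSD (the 9.1 reference suggests so), the ARC can be capped from /boot/loader.conf so the other services keep some of the 4 GB; the 1 GB figure below is only an illustration:

    # /boot/loader.conf -- cap the ARC at 1 GB (value in bytes)
    vfs.zfs.arc_max="1073741824"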
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
...sers           nbmand           on                        received
remotepool/users  sharesmb         name=users,guestok=true   received
remotepool/users  refquota         none                      default
remotepool/users  refreservation   none                      default
remotepool/users  primarycache     all                       default
remotepool/users  secondarycache   all                       default
remotepool/users  usedbysnapshots  0                         -
remotepool/users  usedbydataset    9.06G                     -
remotepool/users  usedbychildren   0...
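For reference, the kind of cross-platform replication being checked here is typically driven by zfs send/receive over ssh; a minimal sketch with made-up host and local dataset names:

    # snapshot the source and replicate it into the remote pool
    zfs snapshot localpool/users@rep1
    zfs send localpool/users@rep1 | ssh backuphost zfs receive -F remotepool/users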
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
...fletching on the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent ios from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms). I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made a difference. I'm trying to understand why we're seeing a lot of read requests going to disk when the arc is set to 8GB and we have a 32GB ssd l2arc. With so many read requests hitting disks it...
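One way to sanity-check whether the 8GB ARC and 32GB L2ARC are actually absorbing reads is to look at the standard arcstats kstats (Solaris-style counter names):

    # ARC hit/miss counters and current size
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses zfs:0:arcstats:size
    # L2ARC counters
    kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses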
2010 Dec 21
5
relationship between ARC and page cache
One thing I've been confused about for a long time is the relationship between ZFS, the ARC, and the page cache. We have an application that's a quasi-database. It reads files by mmap()ing them. (writes are done via write()). We're talking 100TB of data in files that are 100k->50G in size (the files have headers to tell the app what segment to map, so mapped chunks
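A rough way to see how memory is split between the ARC and the regular page cache (which holds the mmap()ed copies) on Solaris is the kernel memstat dcmd; category names vary by release, so treat this only as a starting point:

    # breakdown of kernel memory, ZFS file data (ARC) and page cache
    echo ::memstat | mdb -k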
2011 Aug 11
6
unable to mount zfs file system..pl help
...                          off     default
pool1/fs1   nbmand           off     default
pool1/fs1   sharesmb         off     default
pool1/fs1   refquota         none    default
pool1/fs1   refreservation   none    default
pool1/fs1   primarycache     all     default
pool1/fs1   secondarycache   all     default
pool1/fs1   usedbysnapshots  0       -
pool1/fs1   usedbydataset    21K     -
pool1/fs1   usedbychildren   0       -
pool1/fs1   usedbyref...
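When a dataset refuses to mount, the properties usually worth checking first, alongside the listing above, are mountpoint, canmount and mounted (dataset name reused from the excerpt); roughly:

    zfs get mountpoint,canmount,mounted pool1/fs1
    # attempt the mount explicitly to see the actual error message
    zfs mount pool1/fs1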
2010 Oct 01
1
File permissions getting destroyed with M$ software on ZFS
...data/admin/ENS   nbmand           off     default
fsdata/admin/ENS    sharesmb         off     default
fsdata/admin/ENS    refquota         none    default
fsdata/admin/ENS    refreservation   none    default
fsdata/admin/ENS    primarycache     all     default
fsdata/admin/ENS    secondarycache   all     default
fsdata/admin/ENS    usedbysnapshots  0       -
fsdata/admin/ENS    usedbydataset    73.6G   -
fsdata/admin/ENS    usedbychildren   0...
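A commonly suggested mitigation when Windows clients rewrite permissions over CIFS is to have ZFS pass ACLs through rather than recompute them on chmod; this is only a sketch, not something stated in the thread, and whether these properties apply depends on the ZFS release:

    zfs set aclinherit=passthrough fsdata/admin/ENS
    zfs set aclmode=passthrough fsdata/admin/ENS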
2013 Mar 06
0
where is the free space?
...fault
tank/lxc/tipper/brick1   sharesmb         off     default
tank/lxc/tipper/brick1   refquota         none    default
tank/lxc/tipper/brick1   refreservation   none    default
tank/lxc/tipper/brick1   primarycache     all     default
tank/lxc/tipper/brick1   secondarycache   all     default
tank/lxc/tipper/brick1   usedbysnapshots  0       -
tank/lxc/tipper/brick1   usedbydataset    16.4G...
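The usedby* breakdown shown in the excerpt can also be read in one shot with the space view of zfs list (dataset name taken from the excerpt):

    # shows used, usedbysnapshots, usedbydataset, usedbyrefreservation and usedbychildren
    zfs list -o space -r tank/lxc/tipper/brick1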
2010 Apr 02
0
ZFS behavior under limited resources
...ll). The tests are run within the same directory. Test 1: Random writes @ 4k to 1000 1MB files (1000 threads, 1 per file). First I observe that ARC size grows (momentarily) above the 512 MB limit (via kstat and arcstat.pl). Q: It seems that zfs:zfs_arc_max is not really a hard limit? I tried setting primarycache to none, metadata and all. The I/O reported is similar in the NONE and METADATA cases (17 MB/s), while when set to ALL, I/O is 3-4 times less (4-5 MB/s). Q: Any explanation would be useful. In this test I observe that backend I/O averages 132 MB/s for READs and 51 MB/s for WRITEs. Q: Why is more re...
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
...       off              off     default
rpool     sharesmb         off     default
rpool     refquota         none    default
rpool     refreservation   none    default
rpool     primarycache     all     default
rpool     secondarycache   all     default
rpool     usedbysnapshots  0       -
rpool     usedbydataset    81K     -
rpool     usedbyc...
2011 Jun 24
13
Fixing txg commit frequency
...ing method:

# streaming clients   pool load [%]
15                    8%
20                    11%
40                    22%
60                    33%
80                    44%
--- around here txg timeouts start to shorten ---
85                    60%
90                    70%
95                    85%

My application does a fair bit of caching and prefetching, so I have zfetch disabled and primarycache set to only metadata. Also, reads happen (on a per client basis) relatively infrequently, so I can easily take it if the pool stops reading for a few seconds and just writes data. The problem is, ZFS starts alternating between reads and writes really quickly, which in turn starves me on IOPS and r...
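For reference, the prefetch and txg-interval knobs touched on here are usually adjusted in /etc/system on Solaris-derived systems; the names come from general ZFS tuning practice rather than this thread, the values are illustrative, and a reboot is required:

    * disable file-level prefetch (zfetch)
    set zfs:zfs_prefetch_disable = 1
    * txg commit interval in seconds
    set zfs:zfs_txg_timeout = 5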
2010 Jun 08
1
ZFS Index corruption and Connection reset by peer
...default
zhome/username   nbmand           off     default
zhome/username   sharesmb         off     default
zhome/username   refquota         none    default
zhome/username   refreservation   none    default
zhome/username   primarycache     all     default
zhome/username   secondarycache   all     default
zhome/username   usedbysnapshots  0       -
zhome/username   usedbydataset    750M    -
zhome/username   usedbychildren   0       -...
2012 Nov 20
6
zvol wrapped in a vmdk by Virtual Box and double writes?
...es. Anyone have any thoughts on what might be happening here? I can appreciate that if everything comes through as a sync write, it goes to the ZIL first, then to its final resting place - but it seems a little over the top that it really is double. I have also had a play with sync=, primarycache settings and a few other things, but it doesn't seem to change the behaviour. Again - I'm looking for thoughts here - as I have only really just started looking into this. Should I happen across anything interesting, I'll follow up this post. Cheers, Nathan. :)
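The sync property experiments mentioned above would look roughly like this (the zvol name is made up; sync=disabled discards the synchronous-write guarantee, so it is only suitable for testing):

    zfs get sync,logbias tank/vbox-zvol
    zfs set sync=disabled tank/vbox-zvol     # for testing only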
2009 Aug 21
9
Not sure how to do this in zfs
Hello all, I've tried changing all kinds of attributes for the zfs's, but I can't seem to find the right configuration. So I'm trying to move some zfs's under another, it looks like this: /pool/joe_user move to /pool/homes/joe_user I know I can do this with zfs rename, and everything is fine. The problem I'm having is, when I mount
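The rename itself is a single command; a sketch using the names from the post (the new parent dataset has to exist first):

    # create the new parent if it does not already exist
    zfs create pool/homes
    # move the dataset (and its snapshots) under the new parent
    zfs rename pool/joe_user pool/homes/joe_user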
2010 Nov 18
5
RAID-Z/mirror hybrid allocator
Hi, I'm referring to: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913 It should be in Solaris 11 Express, has anyone tried this? How is this supposed to work? Any documentation available? Yours Markus Kovero
2010 May 07
2
ZFS root ARC memory usage on VxFS system...
Hi Folks.. We have started to convert our Veritas clustered systems over to ZFS root to take advantage of the extreme simplification of using Live Upgrade. Moving the data of these systems off VxVM and VxFS is not in scope for reasons too numerous to go into.. One thing my customers noticed immediately was a reduction in "free" memory as reported by 'top'. By way
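The "missing" free memory is normally just the ARC; its current and target sizes (in bytes) can be read directly from the arcstats kstat:

    # current ARC size and target size
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c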
2009 Jun 15
33
compression at zfs filesystem creation
Hi, I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump? Thanks, ~~sa
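For completeness, compression can be enabled either at creation time or afterwards; swap and dump volumes are the usual exceptions raised above, since their contents compress poorly or are handled specially. Dataset names here are illustrative:

    # enable at creation time
    zfs create -o compression=on rpool/export/data
    # or turn it on for an existing dataset (only newly written blocks are compressed)
    zfs set compression=on rpool/export/data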