Displaying 20 results from an estimated 8000 matches similar to: "zfs primarycache and secondarycache properties"
2012 Dec 01
3
6Tb Database with ZFS
Hello,
I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I
want to set the arc_max parameter so ZFS can't use all of my system's memory,
but I don't know how much I should set. Do you think 24GB will be enough for a
6TB database? Obviously the more the better, but I can't set too much memory.
Has anyone successfully implemented something similar?
We ran some tests and the
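A minimal sketch of capping the ARC on Solaris, assuming a 24 GB ceiling is the figure settled on (the value here is illustrative, not a recommendation): add the tunable to /etc/system and reboot.
set zfs:zfs_arc_max = 0x600000000
0x600000000 is 24 GiB expressed in bytes; after the reboot the effective limit can be checked with kstat -p zfs:0:arcstats:c_max.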
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi,
I have two servers running: FreeBSD with a zpool v28 and a Nexenta (OpenSolaris b134) box running zpool v26.
Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic.
check the panic @
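A minimal sketch of the send/receive replication being described, assuming the source dataset on the Nexenta box is also called remotepool/users (the post does not give its name) and a snapshot name and ssh host that are purely illustrative:
# zfs snapshot -r remotepool/users@repl1
# zfs send -R remotepool/users@repl1 | ssh freebsd-host zfs receive -F remotepool/users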
2012 Feb 26
3
zfs diff performance
I had high hopes of significant performance gains using zfs diff in
Solaris 11 compared to my home-brew stat based version in Solaris 10.
However the results I have seen so far have been disappointing.
Testing on a reasonably sized filesystem (4TB), a diff that listed 41k
changes took 77 minutes. I haven't tried my old tool, but I would
expect the same diff to take a couple of
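A minimal sketch of the kind of comparison being timed, with the dataset and snapshot names purely illustrative:
# zfs snapshot tank/data@mon
  (workload makes changes)
# zfs snapshot tank/data@tue
# zfs diff tank/data@mon tank/data@tue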
2010 Feb 18
3
improve meta data performance
We have a SunFire X4500 running Solaris 10U5 which does about 5-8k NFS ops,
of which about 90% are metadata. In hindsight it would have been
significantly better to use a mirrored configuration, but we opted for 4 x
(9+2) raidz2 at the time. We cannot take the downtime necessary to change
the zpool configuration.
We need to improve the metadata performance with little to no money. Does
anyone
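A minimal sketch of one low-cost option, assuming a spare SSD at c1t5d0 and a pool named tank (both illustrative): add an L2ARC device and restrict it to metadata via the secondarycache property.
# zpool add tank cache c1t5d0
# zfs set secondarycache=metadata tank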
2011 Aug 11
6
unable to mount zfs file system..pl help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
pool1 120K 228G 21K /pool1
pool1/fs1 21K 228G 21K /vik
[root at
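A minimal sketch of the checks usually tried on other ZFS platforms when a dataset refuses to mount; pool1/fs1 is taken from the listing above, but whether these commands behave the same on this early zfs-0.5.2 Linux build is uncertain.
# zfs get mountpoint,mounted,canmount pool1/fs1
# zfs mount pool1/fs1
# zfs mount -a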
2010 Oct 01
1
File permissions getting destroyed with M$ software on ZFS
All,
Running Samba 3.5.4 on Solaris 10 with a ZFS file system. I have
issues with our shared group folders. In these folders, userA in
GroupA creates files just fine with the correct inherited permissions
of 660. The problem is that when userB in GroupA reads and modifies that file
with M$ Office apps, the permissions get whacked to 060+ and the file becomes
read-only for everyone.
I did
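A minimal sketch of the ZFS ACL properties usually examined for this kind of permission mangling, assuming a share dataset named tank/share (illustrative); whether passthrough is the right setting depends on how Samba rewrites the ACL on save.
# zfs get aclmode,aclinherit tank/share
# zfs set aclinherit=passthrough tank/share
# zfs set aclmode=passthrough tank/share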
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris. I had to
abandon ZFS on local disks in x64 Solaris 10 5/08.
The situation was that:
* DB2 buffer pools occupied up to 90% of 32GB RAM on each host
* DB2 cached the entire database in its buffer pools
o having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
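A minimal sketch of approximating directio on ZFS by caching only metadata, assuming a dataset named tank/db2 that is illustrative; this uses the primarycache/secondarycache properties rather than true directio.
# zfs set primarycache=metadata tank/db2
# zfs set secondarycache=none tank/db2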
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings Gentlemen,
I'm currently testing a new setup for a ZFS-based storage system with
dedup enabled. The system is set up on OI 148, which seems quite stable
w/ dedup enabled (compared to the OpenSolaris snv_136 build I used
before).
One issue I ran into, however, is quite baffling:
With iozone set to 32 threads, ZFS's ARC seems to consume all available
memory, making
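A minimal sketch of how the ARC and the dedup table are commonly inspected on an OpenIndiana box in this situation (the pool name tank is illustrative):
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
# zdb -DD tank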
2010 Dec 21
5
relationship between ARC and page cache
One thing I've been confused about for a long time is the relationship
between ZFS, the ARC, and the page cache.
We have an application that's a quasi-database. It reads files by
mmap()ing them. (writes are done via write()). We're talking 100TB of
data in files that are 100k->50G in size (the files have headers to tell
the app what segment to map, so mapped chunks
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Pre-fetching at the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent I/Os from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms).
I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
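A minimal sketch of the primarycache change mentioned, assuming the dataset is named tank/data (illustrative); the property accepts all, metadata, or none.
# zfs set primarycache=metadata tank/data
# zfs get primarycache,secondarycache tank/data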
2009 Aug 21
9
Not sure how to do this in zfs
Hello all,
I've tried changing all kinds of attributes for the ZFS filesystems, but I can't
seem to find the right configuration.
So I'm trying to move some ZFS filesystems under another one; it looks like this:
/pool/joe_user move to /pool/homes/joe_user
I know I can do this with zfs rename, and everything is fine. The problem
I'm having is, when I mount
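A minimal sketch of the move being described, using the paths from the post; the zfs inherit step is one common way to let the mountpoint follow the new name if an explicit mountpoint had been set earlier.
# zfs rename pool/joe_user pool/homes/joe_user
# zfs inherit mountpoint pool/homes/joe_user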
2013 May 09
4
recommended memory for zfs
Hello, a ZFS question about memory.
I heard ZFS is very RAM hungry.
Services I'm looking to run:
- nginx
- postgres
- php-fpm
- python
I have a machine with two quad-core CPUs but only 4 GB of memory.
I'm looking to buy more RAM now.
What would be the recommended amount of memory for ZFS across 6 drives on
this setup?
Also, can 9.1 now boot to ZFS from the installer?
(no tricks for post install)
Thanks
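A minimal sketch of capping the ARC on FreeBSD so it coexists with the services listed above, assuming a 2 GB ceiling that is purely illustrative: set the tunable in /boot/loader.conf and reboot.
vfs.zfs.arc_max="2G"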
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check
how much space the package cache for pkg(1) uses, it takes a bit
longer on this host than on a comparable machine to which I transferred
all the data.
user at host:/var/pkg$ time
2010 Jun 08
1
ZFS Index corruption and Connection reset by peer
Hello,
I'm currently using dovecot 1.2.11 on FreeBSD 8.0 with ZFS filesystems.
So far, so good, it works quite nicely, but I have a couple of glitches.
Each user has his own ZFS filesystem, mounted on /home/<user> (easier
to set per-user quotas), and mail is stored in their home.
From day one, when people check their mail via IMAP, a lot of index
corruption occurred:
dovecot:
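A minimal sketch of the per-user layout being described, with the pool, user name, and quota value purely illustrative:
# zfs create -o quota=2G -o mountpoint=/home/alice tank/home/alice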
2009 Jun 15
33
compression at zfs filesystem creation
Hi,
I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump?
Thanks,
~~sa
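A minimal sketch of enabling compression today, either per filesystem at creation time or once at the pool root so that children inherit it (the dataset names are illustrative):
# zfs create -o compression=on rpool/export/data
# zfs set compression=on rpool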
2011 Jul 30
7
NexentaCore 3.1 - ZFS V. 28
apt-get update
apt-clone upgrade
Any first impressions?
--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system,
which is intended to be used as a backup SAN during storage migration,
so it's built on a tight budget.
The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII
SATA HDDs attached to an Areca 8port ARC-1220 controller
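A minimal sketch of how SSDs are typically attached for ZIL and L2ARC once chosen, assuming a pool named tank and device names that are purely illustrative:
# zpool add tank log c2t0d0
# zpool add tank cache c2t1d0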
2010 Nov 18
5
RAID-Z/mirror hybrid allocator
Hi, I'm referring to:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913
It should be in Solaris 11 Express; has anyone tried this? How is this supposed to work? Any documentation available?
Yours
Markus Kovero
2011 Jan 10
1
ZFS root backup/"disaster" recovery, and moving root pool
Hi everyone
I am currently testing Solaris 11 Express. I have a root pool on a
mirrored pair of small disks, and a data pool consisting of 2 mirrored pairs
of 1.5TB drives.
I have enabled auto snapshots on my root pool, and plan to archive the daily
snapshots onto my data pool. I was wondering how easy it would be, in the
case of a root pool failure (i.e. both disks giving up the
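A minimal sketch of archiving a root pool snapshot onto the data pool as described, assuming the data pool is named datapool and the snapshot name is illustrative; the -u flag keeps the received copies from being mounted over the live root.
# zfs snapshot -r rpool@archive1
# zfs send -R rpool@archive1 | zfs receive -u datapool/rpool-backup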
2011 Nov 08
6
Couple of questions about ZFS on laptops
Hello all,
I am thinking about a new laptop. I see that there are
a number of higher-performance models (incidentally, they
are also marketed as "gamer" ones) which offer two SATA
2.5" bays and an SD flash card slot. Vendors usually
position the two-HDD bay part as either "get lots of
capacity with RAID0 over two HDDs, or get some capacity
and some performance by mixing one