Displaying 20 results from an estimated 2000 matches similar to: "ZFS Evil Tuning Guide"
2009 Mar 04
5
Oracle database on zfs
Hi,
I am wondering if there is a guideline on how to configure ZFS on a server
with Oracle database?
We are experiencing some slowness on writes to the ZFS filesystem. It takes about
530 ms to write 2 KB of data.
We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5
EMC EMX.
This is a small database with about 18gb storage allocated.
Are there tunable parameters that we can apply to
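For reference (not from this thread), a minimal sketch of the usual ZFS-for-Oracle starting point, assuming a hypothetical pool/dataset layout dbpool/oradata and dbpool/oralog and an 8 KB Oracle db_block_size:
# zfs create dbpool/oradata
# zfs set recordsize=8k dbpool/oradata    (match the Oracle data block size; only affects files created afterwards)
# zfs create dbpool/oralog                (keep redo logs in a separate dataset and leave its recordsize at the 128k default)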
2007 Nov 27
4
SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Hi,
I read some articles on solarisinternals.com, like the "ZFS_Evil_Tuning_Guide" at http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They clearly suggest disabling the cache flush: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH .
It seems to be the only serious article on the net about this subject.
Could someone here weigh in on this
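For context, the tunable the guide refers to goes in /etc/system; a sketch, and only something to consider if the array cache really is non-volatile/battery-protected:
set zfs:zfs_nocacheflush = 1
It takes effect at the next reboot; the guide also describes changing it on a live system with mdb -kw.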
2007 Nov 15
3
read/write NFS block size and ZFS
Hello all...
I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That gave me the best performance, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I may take a performance hit, so I want to know if there is a "recommendation" for block size on NFS/ZFS, or
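A hedged sketch of the knobs usually looked at together here (hypothetical names; the 8k values simply mirror the Linux clients above):
client# mount -o vers=3,rsize=8192,wsize=8192 server:/tank/export /mnt
server# zfs set recordsize=8k tank/export   (mainly helps fixed-size, record-oriented workloads; the 128k default is usually fine for general NFS file serving)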
2007 Feb 13
4
Best Practises => Keep Pool Below 80%?
In the ZFS Best Practises Guide here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It says:
"Currently, pool performance can degrade when a pool is very full
and file systems are updated frequently, such as on a busy mail
server. Under these circumstances, keep pool space under 80%
utilization to maintain pool performance."
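A quick way to keep an eye on this (sketch, with a hypothetical pool named tank): watch the CAP column, and optionally put a quota on the top-level dataset so the pool cannot be filled much past 80%:
# zpool list tank
# zfs set quota=800g tank    (example for a 1 TB pool; pick roughly 80% of your pool size)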
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that is providing striped LUNs to my Solaris box, so I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understand, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get RAID-0 striping where each data block is split across all "n" LUNs.
If that's
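Not from the thread, but for context: with multiple top-level vdevs ZFS does dynamic striping, where each block is written whole to a single vdev rather than being split across all of them, so the result is closer to a load-balanced concat than to classic RAID-0. A sketch with hypothetical LUN names:
# zpool create myPool lun-1 lun-2 lun-3
# zpool iostat -v myPool 5    (shows how the writes are spread across the individual LUNs)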
2008 Dec 19
4
ZFS boot and data on same disk - is this supported?
I have read the ZFS best practice guide located at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
However, I have questions about whether we support using slices for data on the
same disk we use for ZFS boot. What issues does this create if we
have a disk failure in a mirrored environment? Does anyone have examples
of customers doing this in production environments?
I
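For illustration only (hypothetical device names): a root pool on slice 0 of each disk and a separate data pool mirrored across slice 7 of the same disks. Note that when ZFS is given slices rather than whole disks it does not enable the disk write cache on its own:
# zpool create datapool mirror c0t0d0s7 c0t1d0s7
(the root pool on c0t0d0s0 and c0t1d0s0 would normally have been created by the installer)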
2009 Oct 22
1
raidz "ZFS Best Practices" wiki inconsistency
<http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations>
says that the number of disks in a RAIDZ should be (N+P) with
N = {2,4,8} and P = {1,2}.
But if you go down the page just a little further to the Thumper
configuration examples, none of the three examples follows this recommendation!
I will have 10 disks to put into a
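One layout for 10 disks that does match the (N+P) guidance is a single raidz2 vdev with N=8 data disks and P=2 parity; a sketch with hypothetical device names:
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0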
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris. I had to
abandon ZFS on local disks in x64 Solaris 10 5/08.
The situation was that:
* DB2 buffer pools occupied up to 90% of the 32 GB of RAM on each host
* DB2 cached the entire database in its buffer pools
  o having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
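On releases that have the primarycache property (Solaris 10 5/08 apparently does not, which may be part of the problem above), the usual approximation of directio for a database dataset is to cache metadata only; a sketch with a hypothetical dataset name:
# zfs set primarycache=metadata dbpool/db2data
# zfs get primarycache dbpool/db2data    (verify the setting; file data will no longer be cached in the ARC)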
2008 Sep 11
4
ZFS Panicing System Cluster Crash effect
Issues with ZFS and Sun Cluster:
If a cluster node crashes and the HAStoragePlus resource group containing the ZFS structure (i.e., the zpool) is transitioned to a surviving node, the zpool import can cause the surviving node to panic. The zpool was obviously not exported in a controlled fashion because of the hard crash. The storage structure is an HW-RAID-protected LUN from the array, with the zpool built on the single HW LUN. Zpool created
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database backends.
So now we are looking for a new database server to give us a big performance boost, and also the possibility of scalability.
Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
2008 Jan 31
1
simulating directio on zfs?
The big problem that I have with non-directio is that buffering delays program execution. When reading/writing files that are many times larger than RAM without directio, it is very apparent that system response drops through the floor; it can take several minutes for an ssh login to prompt for a password. This is true for both UFS and ZFS.
Repeat the exercise with directio on UFS and there is no
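For comparison, the UFS half of that exercise is just a mount option (sketch, hypothetical device and mount point); ZFS has no direct equivalent, which is what this thread is asking about:
# mount -o forcedirectio /dev/dsk/c0t1d0s6 /bigfiles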
2010 Apr 28
3
Solaris 10 default caching segmap/vpm size
What's the default size of the file system cache for Solaris 10 x86, and can it be tuned?
I read various posts on the subject and it's confusing...
--
This message posted from opensolaris.org
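If it is the ZFS ARC rather than segmap/vpm that is actually growing, it can be capped in /etc/system; a sketch assuming you want roughly a 4 GB cache:
set zfs:zfs_arc_max = 0x100000000
(takes effect at the next reboot)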
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology ...
I'm actually speaking of hardware :)
ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed it should be able to handle a lot of disks.
I want to
2008 Sep 14
10
ZFS system requirements
Hi, this says that OpenSolaris only requires 512 MB of RAM: http://dlc.sun.com/osol/docs/content/IPS/sysreq.html
This says 1 GB of RAM and a 64-bit processor are recommended: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Swap_Space
Am I going to have problems if I run OpenSolaris and ZFS at the minimum requirements?
--
This message posted from opensolaris.org
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09...
There are supposed to be performance improvements if you create a zpool
on a full disk, such as one with an EFI label. Does the same apply if
the full disk is used with an SMI label, which is required to boot?
I am trying to determine the trade-off, if any, of having a single rpool
on cXtYd0s2, if I can even do that, and improved performance compared to
having two
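Not from the thread, but the practical difference usually comes down to the disk write cache, which ZFS enables only when it is handed a whole disk (EFI label). A sketch with hypothetical devices:
# zpool create datapool c1t1d0      (whole disk: ZFS writes an EFI label and turns the write cache on)
# zpool create datapool2 c1t2d0s0   (a slice under an SMI label, as a bootable rpool requires: the write cache is left as-is)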
2010 Feb 12
13
SSD and ZFS
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here, so sunmanagers please excuse the double
post:
I have inherited an X4140 (8 SAS slots) and have just set up the system
with Solaris 10 09. I first set up the system on a mirrored pool over
the first two disks
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME
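If the remaining SAS slots get SSDs, the usual pattern on a release that supports log and cache vdevs is a separate intent log plus an L2ARC device; a sketch assuming a hypothetical data pool named tank and hypothetical device names:
# zpool add tank log c1t2d0      (SSD as separate ZFS intent log, for synchronous write latency)
# zpool add tank cache c1t3d0    (SSD as L2ARC read cache)
# zpool status tank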
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributor grants are set
to expire on 02-24-2009, we need to renew the members that are still
contributing at core contributor levels. We should also add some new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill
2007 Sep 21
4
ZFS (and quota)
I'm CCing zfs-discuss at opensolaris.org, as this doesn't look like a
FreeBSD-specific problem.
It looks like there is a problem with block allocation(?) when we are near
the quota limit. The tank/foo dataset has its quota set to 10m:
Without quota:
FreeBSD:
# dd if=/dev/zero of=/tank/test bs=512 count=20480
time: 0.7s
Solaris:
# dd if=/dev/zero of=/tank/test bs=512 count=20480
time: 4.5s
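To see how close the dataset is to its quota while running such tests, a simple check (sketch):
# zfs get quota,used,available tank/foo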
2008 Nov 21
4
MFC ZFS: when?
In several of the recent ZFS posts, multiple people have asked when this
will be MFC'd to 7.x. This query has been studiously ignored amid the other
chatter about whatever ZFS issue is being discussed.
So in a post with no other bug report or discussion content to distract us,
when is it intended that ZFS be MFC'd to 7.x?
2008 Jul 07
1
ZFS and Caching - write() syscall with O_SYNC
IHAC using ZFS in production, and he's opening some files with the
O_SYNC flag. This affects subsequent write()s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and the file status to be physically updated.
Because of this, he's seeing some delays on the file write()s. This is
verified with
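Not from the thread, but the usual mitigation when every O_SYNC write waits on the ZIL is a fast separate log device, on a ZFS version with slog support; a sketch with hypothetical names:
# zpool add pool log c2t0d0
# zpool iostat -v pool 5    (the log vdev should now be absorbing the synchronous writes)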