Displaying 20 results from an estimated 23 matches for "zfs_evil_tuning_guid".
2007 Nov 27
4
SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Hi,
I read some articles on solarisinternals.com, like the "ZFS_Evil_Tuning_Guide" at http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They clearly suggest disabling the cache flush: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH
It seems to be the only serious article on the net about this subject.
Could someone here sta...
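For reference, the FLUSH section of that guide comes down to a single /etc/system setting; a minimal sketch, and only appropriate when every device in every pool sits behind a battery-protected NVRAM write cache:
# /etc/system -- stop ZFS from sending cache-flush commands to the array
set zfs:zfs_nocacheflush = 1
The setting is global and takes effect at the next reboot, so it should not be used on a host whose pools also include plain disks with volatile write caches.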
2007 Sep 17
4
ZFS Evil Tuning Guide
In general, tuning should not be done; best practices
should be followed instead.
So first, get well acquainted with this:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Then, if you must, this could soothe or sting:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
So drive carefully.
-r
2009 Mar 04
5
Oracle database on zfs
Hi,
I am wondering if there is a guideline on how to configure ZFS on a server
with an Oracle database.
We are experiencing some slowness on writes to the ZFS filesystem. It takes about
530 ms to write 2 KB of data.
We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5
EMC EMX.
This is a small database with about 18 GB of storage allocated.
Are there any tunable parameters that we can apply to
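For what it is worth, the commonly cited starting point for Oracle on ZFS is to match the recordsize of the data filesystem to the database block size; a hedged sketch, with hypothetical pool and filesystem names and assuming the default 8 KB db_block_size:
# set before loading the data files; recordsize only affects newly written blocks
zfs set recordsize=8k mypool/oradata
# keep redo logs on a separate filesystem that retains the default 128 KB recordsize
zfs create mypool/oralogs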
2008 Jan 31
1
simulating directio on zfs?
The big problem that I have with non-directio is that buffering delays program execution. When reading or writing files that are many times larger than RAM without directio, it is very apparent that system response drops through the floor; it can take several minutes for an ssh login to prompt for a password. This is true for both UFS and ZFS.
Repeat the exercise with directio on UFS and there is no
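One commonly suggested approximation of directio on ZFS, which has no directio mount option, is to stop the ARC from caching file data; a sketch assuming a build with the primarycache property and a hypothetical dataset name:
# cache only metadata for this dataset, so large streaming files are not double-buffered
zfs set primarycache=metadata tank/bigfiles
Capping the ARC (the zfs_arc_max tunable described in the Evil Tuning Guide) is the other step usually paired with this.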
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
2008 Jul 07
1
ZFS and Caching - write() syscall with O_SYNC
IHAC using ZFS in production, and he's opening some files with the
O_SYNC flag. This affects subsequent write()s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and the file status to be physically updated.
Because of this, he's seeing some delays on the file write()s. This is
verified with
2008 Dec 02
1
zfs_nocacheflush, nvram, and root pools
...ool.
Am I right, or could I encounter problems here?
(The system is an NFS server, which means lots of synchronous writes (and
therefore ZFS cache flushes), so I *really* want the performance benefit of
using the NVRAM write cache.)
- river.
[1] http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
2007 Nov 15
3
read/write NFS block size and ZFS
Hello all...
I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That gave me the best performance, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I may take a performance loss, so I want to know if there is a "recommendation" for the block size on NFS/ZFS, or
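For comparison, a hedged example of pinning the transfer size on a Linux NFSv3 client (server and paths are hypothetical), matching the 8 KB value mentioned above:
mount -t nfs -o vers=3,rsize=8192,wsize=8192 server:/export/data /mnt/data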
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris. I had to
abandon ZFS on local disks in x64 Solaris 10 5/08.
The situation was that:
* DB2 buffer pools occupied up to 90% of 32GB RAM on each host
* DB2 cached the entire database in its buffer pools
  - having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
2007 Oct 24
1
memory issue
Hello,
I received the following question from a company I am working with:
We are having issues in our early experiments with ZFS, using volumes
mounted from a 6130.
Here is what we have and what we are seeing:
T2000 (geronimo) on the fibre with a 6130.
6130 configured with UFS volumes mapped and mounted on several
other hosts.
It's the only host using a ZFS volume (only
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
...to add redundancy to compensate for the fact that one
controller would then be able to take out three drives.)
I've considered adding a drive for the ZIL instead, but my experiments
in disabling the ZIL (using the evil tuning guide at
<http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Disabling_the_ZIL_.28Don.27t.29>)
didn't show any speed increase. (I know it's a bad idea to run the system with the
ZIL disabled; I disabled it only to measure its impact on my write speeds and
re-enabled it after testing was complete.)
Current system:
OpenSolaris dev release b132...
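For context, the mechanism that guide section describes on builds of that era is a single tunable, shown here only as a sketch and, as the section title says, not something to run in production:
# /etc/system -- historic Solaris Nevada/OpenSolaris tunable; later ZFS releases
# expose this per dataset as the sync property (zfs set sync=disabled)
set zfs:zil_disable = 1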
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files at a time.
If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it at once, it can
2007 Dec 21
1
Odd behavior of NFS of ZFS versus UFS
I have a test cluster running HA-NFS that shares both ufs and zfs based file systems. However, the behavior that I am seeing is a little perplexing.
The Setup: I have Sun Cluster 3.2 on a pair of SunBlade 1000s connecting to two T3B partner groups through a QLogic switch. All four bricks of the T3B are configured as RAID-5 with a hot spare. One brick from each pair is mirrored with VxVM
2009 Jan 13
12
OpenSolaris better Than Solaris10u6 with requards to ARECA Raid Card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]
Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor:
Seagate
2010 Feb 12
13
SSD and ZFS
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here, so sunmanagers please excuse the double
post:
I have inherited an X4140 (8 SAS slots) and have just set up the system
with Solaris 10 09. I first set up the system on a mirrored pool over
the first two disks
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32 GB Intel X25-E SSD drives. Would this be a good solution for slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
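A hedged sketch of how such devices would be attached (pool and device names are hypothetical); slog devices are usually mirrored, while cache devices cannot be and do not need to be:
# dedicated ZIL (slog), mirrored across two SSDs
zpool add tank log mirror c3t0d0 c3t1d0
# L2ARC cache devices
zpool add tank cache c3t2d0 c3t3d0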
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database backends.
So now we are looking for a new database server to give us a big performance boost, and also the possibility for scalability.
Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
2008 Nov 21
4
MFC ZFS: when?
In several of the recent ZFS posts, multiple people have asked when this
will be MFC'd to 7.x. This query has been studiously ignored amid other
chatter about whatever ZFS issue is being discussed.
So in a post with no other bug report or discussion content to distract us,
when is it intended that ZFS be MFC'd to 7.x?
2009 Oct 10
11
SSD over 10gbe not any faster than 10K SAS over GigE
GigE wasn't giving me the performance I had hoped for, so I sprang for some 10GbE cards. So what am I doing wrong?
My setup is a Dell 2950 without a RAID controller, just a SAS6 card. The setup is as such:
mirror rpool (boot) SAS 10K
raidz SSD 467 GB on 3 Samsung 256 MLC SSD (220MB/s each)
To create the raidz I did a simple zpool create raidz SSD c1xxxxx c1xxxxxx c1xxxxx. I have
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where writing a directory of files,
including some large ones 100+ MB in size, can cause other
clients over NFS to pause for seconds (5-30 or so). This is on B70 bits.
I've gotten used to this behavior over NFS, but