Displaying 14 results from an estimated 14 matches for "zfs_nocacheflush".
2008 Jul 30
2
zfs_nocacheflush
A question regarding zfs_nocacheflush:
The Evil Tuning Guide says to only enable this if every device is
protected by NVRAM.
However, is it safe to enable zfs_nocacheflush when I also have
local drives (the internal system drives) using ZFS, in particular if
the write cache is disabled on those drives?
What I have is a local zfs poo...
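For reference, zfs_nocacheflush is a single global switch, so it applies to every pool on the host, local system disks included. A minimal sketch of the two usual ways to set it, assuming a kernel that actually exports the symbol:

set zfs:zfs_nocacheflush = 1            # persistent: add to /etc/system, then reboot
echo zfs_nocacheflush/W0t1 | mdb -kw    # live toggle (0t1 = decimal 1), lost at reboot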
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi,
I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit
and in [b]/etc/system[/b] I put:
[b]set zfs:zfs_nocacheflush = 1[/b]
And after rebooting, I get the message:
[b]sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module[/b]
So is this variable not available in the Solaris kernel?
I'm getting really poor write performance with ZFS on a RAID5 volum...
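A quick sketch of how to check whether the running kernel knows the tunable at all before putting it in /etc/system (assuming mdb is available; earlier Solaris 10 updates shipped a zfs module without this symbol):

echo "zfs_nocacheflush/D" | mdb -k      # prints the current value if the symbol exists
echo "::modinfo ! grep zfs" | mdb -k    # confirm which zfs module revision is loaded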
2008 Dec 02
1
zfs_nocacheflush, nvram, and root pools
...well-known behaviour of ZFS is that it often issues cache flush commands to
storage in order to ensure data integrity; while this is important with normal
disks, it's useless for nvram write caches, and it effectively disables the
cache.
so far, i've worked around this by setting zfs_nocacheflush, as described at
[1], which works fine. but now i want to upgrade this system to Solaris 10
Update 6, and use a ZFS root pool on its internal SCSI disks (previously, the
root was UFS). the problem is that zfs_nocacheflush applies to all pools,
which will include the root pool.
my understanding o...
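One per-target alternative, where the sd driver supports it, is to declare the array's cache non-volatile in /kernel/drv/sd.conf, so flushes are dropped only for that vendor/product pair and the root pool's internal disks keep flushing normally. Sketch only; the inquiry strings below are placeholders and the vendor field is padded to 8 characters:

sd-config-list = "SUN     T4      ", "cache-nonvolatile:true";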
2007 Nov 28
0
[storage-discuss] SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Nicolas Dorfsman wrote:
> On 27 Nov 07, at 16:17, Torrey McMahon wrote:
>
>> According to the array vendor the 99xx arrays no-op the cache flush
>> command. No need to set the /etc/system flag.
>>
>> http://blogs.sun.com/torrey/entry/zfs_and_99xx_storage_arrays
>>
>>
>
>
> Perfect !
>
> Thanks Torrey.
>
>
Just realize
2007 Nov 27
4
SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Hi,
I read some articles on solarisinternals.com, like the "ZFS_Evil_Tuning_Guide" at http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They clearly suggest disabling cache flushes: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH .
It seems to be the only serious article on the net about this subject.
Could someone here state on this
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls
that exist in the various recent Open Solaris flavors? I would like to
move my ZIL to solid state storage, but I fear I can't do it until I
have another update. Heck, I would be happy to just be able to turn the
ZIL off to see how my NFS on ZFS performance is affected before spending
the $'s. Anyone
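For what it's worth, a sketch of the two controls in question as they look on releases that have them (separate log devices arrived in later builds; zil_disable is a global /etc/system tunable and only sensible for testing). Pool and device names are made up:

zpool add tank log c4t0d0       # dedicate an SSD/NVRAM device as the separate ZIL (slog)
set zfs:zil_disable = 1         # /etc/system: turn the ZIL off entirely (testing only)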
2007 Sep 28
4
Sun 6120 array again
...sion...
http://www.opensolaris.org/jive/thread.jspa?messageID=143517
...we never found out how (or if) the Sun 6120 (T4) array can be configured
to ignore cache flush (sync-cache) requests from hosts. We're about to
reconfigure a 6120 here for use with ZFS (S10U4), and the evil tuneable
zfs_nocacheflush is not going to serve us well (there is a ZFS pool on
slices of internal SAS drives, along with UFS boot/OS slices).
Any pointers would be appreciated.
Thanks and regards,
Marion
2008 Feb 01
2
Un/Expected ZFS performance?
I'm running Postgresql (v8.1.10) on Solaris 10 (Sparc) from within a non-global zone. I originally had the database "storage" in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of "X" (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files at a time.
If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it at once, it can
2009 Feb 11
8
Write caches on X4540
We're using some X4540s, with OpenSolaris 2008.11.
According to my testing, to optimize our systems for our specific
workload, I''ve determined that we get the best performance with the
write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set
in /etc/system.
The only issue is setting the write cache permanently, or at least quickly.
Right now, as it is, I've scripted up format to run on boot, disabling
the write cache of all disks. This takes around two minutes. I'd like to
avoid needing to take this time on...
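A rough sketch of the kind of boot-time script being described, feeding format's expert-mode cache menu non-interactively. The disk-matching pattern and the menu responses are assumptions; verify them interactively on one disk first:

#!/bin/sh
# disable the volatile write cache on every disk format can see
for d in $(format < /dev/null | awk '/c[0-9]+t[0-9]+d[0-9]+/ {print $2}'); do
    format -e -d "$d" <<EOF
cache
write_cache
disable
quit
quit
quit
EOF
done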
2007 Sep 04
23
I/O freeze after a disk failure
Hi all,
yesterday we had a drive failure on a fc-al jbod with 14 drives.
Suddenly the zpool using that jbod stopped to respond to I/O requests and we get tons of the following messages on /var/adm/messages:
Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g20000004cfd81b9f (sd52):
Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
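For a failure like this, the standard places to look are ZFS's own status and the FMA error log; a short sketch using stock Solaris commands:

zpool status -xv        # which pool/vdev ZFS considers degraded or faulted
fmdump -eV | tail -40   # recent FMA error telemetry (scsi/io ereports)
cfgadm -al              # whether the FC-AL attachment point still shows the drive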
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to opensolaris and very new to ZFS. In the past we have always used linux for our database backends.
So now we are looking for a new database server to give us a big performance boost, and also the possibility for scalability.
Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
2009 Apr 11
17
Supermicro SAS/SATA controllers?
The standard controller that has been recommended in the past is the
AOC-SAT2-MV8 - an 8-port card with a Marvell chipset. There have been several
mentions of LSI based controllers on the mailing lists and I''m wondering
about them.
One obvious difference is that the Marvell controller is PCI-X and the LSI
controllers are PCI-E.
Supermicro have several LSI controllers. AOC-USASLP-L8i with the