similar to: SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush

Displaying 20 results from an estimated 1000 matches similar to: "SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush"

2007 Nov 28
0
[storage-discuss] SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Nicolas Dorfsman wrote: > On 27 Nov 07 at 16:17, Torrey McMahon wrote: > >> According to the array vendor the 99xx arrays no-op the cache flush >> command. No need to set the /etc/system flag. >> >> http://blogs.sun.com/torrey/entry/zfs_and_99xx_storage_arrays >> >> > > > Perfect! > > Thanks Torrey. > > Just realize
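For arrays that do not ignore the flush on their own, the /etc/system flag referred to in this thread is set roughly as follows; this is a sketch, it is host-wide (it affects every pool on the machine) and takes effect only after a reboot, so it is only sensible when all devices sit behind protected NVRAM:

    # /etc/system -- tell ZFS to stop issuing cache-flush requests (all pools)
    set zfs:zfs_nocacheflush = 1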
2008 Dec 02
1
zfs_nocacheflush, nvram, and root pools
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, I have a system connected to an external DAS (SCSI) array, using ZFS. The array has an NVRAM write cache, but it honours SCSI cache flush commands by flushing the NVRAM to disk. The array has no way to disable this behaviour. A well-known behaviour of ZFS is that it often issues cache flush commands to storage in order to ensure data
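Because zfs_nocacheflush is global and would also cover the local root-pool disks, the per-device alternative discussed in the Evil Tuning Guide is to tell the sd driver that a particular target's write cache is non-volatile. A sketch only: the vendor/product string below is a placeholder for the real inquiry data, and the exact sd-config-list syntax varies by Solaris release:

    # /kernel/drv/sd.conf -- per-device hint (vendor/product ID is a placeholder)
    sd-config-list = "ACME    FASTARRAY", "cache-nonvolatile:true";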
2007 Sep 17
4
ZFS Evil Tuning Guide
Tuning should not be done in general, and best practices should be followed. So get very well acquainted with this first: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Then, if you must, this could soothe or sting: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide So drive carefully. -r
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls that exist in the various recent OpenSolaris flavors? I would like to move my ZIL to solid state storage, but I fear I can't do it until I have another update. Heck, I would be happy to just be able to turn the ZIL off to see how my NFS on ZFS performance is affected before spending the $'s. Anyone
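On releases that do have the controls, the two knobs mentioned here look roughly like the sketch below (pool and device names are hypothetical; disabling the ZIL breaks synchronous-write guarantees for NFS clients and is only a measurement trick):

    # OpenSolaris / later S10 updates: move the ZIL to a dedicated log device
    zpool add tank log c4t0d0

    # older coarse switch in /etc/system (test only, then remove)
    set zfs:zil_disable = 1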
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
I currently am getting good speeds out of my existing system (8x 2TB in a RAIDZ2 exported over fibre channel) but there's no such thing as too much speed, and these other two drive bays are just begging for drives in them.... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase speed, or will the extra parity writes reduce speed, or will the two factors offset and leave things
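For comparison, the two layouts would be created roughly as follows (device names hypothetical); the 8-disk raidz2 leaves 6 data disks per stripe and the 10-disk raidz3 leaves 7, while both remain a single top-level vdev, so random IOPS stay vdev-bound either way:

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
    zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0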
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me! I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files @ a time. If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it @ once, it can
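A first step in a case like this is usually to watch whether the disks or the filesystem layer saturate as the client count grows; both tools below ship with Solaris 10 and are a reasonable, non-invasive sketch of what to look at:

    fsstat zfs 5          # per-fstype operation rates every 5 seconds
    zpool iostat -v 5     # per-vdev bandwidth and IOPS every 5 seconds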
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to windows servers using comstar and fibrechannel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
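With four SSDs, the usual split is to mirror the log device (losing it while it holds uncommitted data is painful) and stripe the cache devices (losing one is harmless). A sketch, with hypothetical pool and device names:

    zpool add tank log mirror c5t0d0 c5t1d0
    zpool add tank cache c5t2d0 c5t3d0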
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi, I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit and in [b]/etc/system[/b] I put: [b]set zfs:zfs_nocacheflush = 1[/b] And after rebooting, I get the message: [b]sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module[/b] So is this variable not available in the Solaris kernel? I'm getting really poor
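The tunable only exists in later Solaris 10 kernels, which is what the boot message indicates. One read-only way to check whether a running kernel knows the symbol at all is to ask mdb for it (a sketch; if the symbol is absent, mdb reports an error instead of a value):

    echo "zfs_nocacheflush/D" | mdb -k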
2009 Mar 04
5
Oracle database on zfs
Hi, I am wondering if there is a guideline on how to configure ZFS on a server with an Oracle database? We are experiencing some slowness on writes to the ZFS filesystem. It takes about 530ms to write 2k of data. We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5 EMC EMX. This is a small database with about 18gb of storage allocated. Are there tunable parameters that we can apply to
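The tuning most often suggested for this workload is matching the record size of the datafile dataset to the database block size before loading data. A sketch only; the dataset names and the 8k block size are assumptions, and redo logs are usually left on a separate dataset at the default record size:

    zfs set recordsize=8k tank/oradata
    # redo-log dataset typically stays at the 128k default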
2007 Nov 15
3
read/write NFS block size and ZFS
Hello all... I'm migrating an nfs server from linux to solaris, and all clients (linux) are using read/write block sizes of 8192. That was the best performance that I got, and it's working pretty well (nfsv3). I want to use all of zfs's advantages, and I know I can have a performance loss, so I want to know if there is a "recommendation" for bs on nfs/zfs, or
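For orientation, the transfer size is a client-side mount option and is independent of the server's ZFS recordsize, which can usually stay at its 128k default for general file service. A sketch of a Linux NFSv3 mount with larger transfers (hostname and paths hypothetical):

    mount -t nfs -o vers=3,rsize=32768,wsize=32768 server:/tank/export /mnt/export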
2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here, so sunmanagers please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks pool: rpool state: ONLINE scrub: none requested config: NAME
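With two slots taken by the mirrored rpool, one common way to lay out the remaining six slots is a mirrored data pool on the spinning disks plus SSDs as log and cache devices. A sketch with hypothetical device names, not a recommendation for this particular box:

    zpool create data mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
    zpool add data log c1t6d0
    zpool add data cache c1t7d0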
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to opensolaris and very new to ZFS. In the past we have always used linux for our database backends. So now we are looking for a new database server to give us a big performance boost, and also the possibility for scalability. Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
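Beyond matching recordsize to the database block size, the tunable that most often matters on a dedicated database host is capping the ARC so it does not compete with the database's own buffer cache. A sketch of /etc/system; the 4 GB figure is an arbitrary example, not a sizing recommendation:

    # /etc/system -- cap the ZFS ARC at 4 GB (value in bytes)
    set zfs:zfs_arc_max = 0x100000000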
2008 Jul 30
2
zfs_nocacheflush
A question regarding zfs_nocacheflush: The Evil Tuning Guide says to only enable this if every device is protected by NVRAM. However, is it safe to enable zfs_nocacheflush when I also have local drives (the internal system drives) using ZFS, in particular if the write cache is disabled on those drives? What I have is a local zfs pool from the free space on the internal drives, so I'm
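Whether the internal drives' write caches really are off can be confirmed per disk from the expert mode of format before touching a global tunable; this is interactive and the menu entries can differ slightly by disk and driver:

    format -e       # select the disk, then: cache -> write_cache -> display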
2007 Dec 21
1
Odd behavior of NFS of ZFS versus UFS
I have a test cluster running HA-NFS that shares both ufs and zfs based file systems. However, the behavior that I am seeing is a little perplexing. The Setup: I have Sun Cluster 3.2 on a pair of SunBlade 1000s connecting to two T3B partner groups through a QLogic switch. All four bricks of the T3B are configured as RAID-5 with a hot spare. One brick from each pair is mirrored with VxVM
2008 Jan 31
1
simulating directio on zfs?
The big problem that I have with non-directio is that buffering delays program execution. When reading/writing files that are many times larger than RAM without directio, it is very apparent that system response drops through the floor- it can take several minutes for an ssh login to prompt for a password. This is true both for UFS and ZFS. Repeat the exercise with directio on UFS and there is no
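ZFS has no directio mount option; on builds that later gained per-dataset cache properties, the closest knob is limiting what the ARC keeps for the dataset holding the huge files. A sketch only, and note the property does not exist in the 2008-era releases this thread is about (dataset name hypothetical):

    zfs set primarycache=metadata tank/bigfiles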
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I do have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understood, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get a RAID0 striping where each data block is split across all "n" LUNs. If that's
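There is no concat vdev type: every top-level vdev becomes part of a dynamic stripe, so new writes are spread across the LUNs by the allocator rather than by a fixed RAID-0 interleave. The closest thing to growing a concatenation is adding LUNs one at a time, as in this sketch (LUN names hypothetical):

    zpool create myPool lun-1
    zpool add myPool lun-2      # grows the pool; new allocations favour the emptier vdev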
2008 Feb 21
3
raidz2 resilience on 3 disks
Hello, 1) If I create a raidz2 pool on some disks, start to use it, then the disks' controllers change. What will happen to my zpool? Will it be lost or is there some disk tagging which allows zfs to recognise the disks? 2) If I create a raidz2 on 3 HDs, do I have any resilience? If any one of those drives fails, do I lose everything? I've got one such pool and
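For question 2, the arithmetic is that a 3-disk raidz2 keeps one disk's worth of data plus two of parity, so any two of the three drives can fail without losing the pool. A sketch with hypothetical device names:

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0
    # usable capacity is roughly one disk; survives failure of any 2 of the 3 disks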
2009 Dec 02
10
Separate Zil on HDD ?
Hi all, I have a home server based on SNV_127 with 8 disks; 2 x 500GB mirrored root pool 6 x 1TB raidz2 data pool This server performs a few functions; NFS : for several 'lab' ESX virtual machines NFS : mythtv storage (videos, music, recordings etc) Samba : for home directories for all networked PCs I backup the important data to external USB hdd each day. I previously had
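On a build as recent as snv_127 a log device can be both added and removed again (log-device removal arrived around build 125), so an experiment with a slice on a spare HDD is cheap to undo. A sketch, with hypothetical pool and slice names:

    zpool add tank log c2t1d0s0
    zpool remove tank c2t1d0s0     # undo the experiment; works on recent builds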
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, whereby a directory of files including some large 100+MB in size being written can cause other clients over NFS to pause for seconds (5-30 or so). This is on B70 bits. I've gotten used to this behavior over NFS, but
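One way to see the pauses described here is to watch per-vdev activity at one-second resolution while an NFS client streams writes; if read operations drop to zero in bursts that line up with transaction-group commits, the pattern matches this report (pool name hypothetical):

    zpool iostat -v tank 1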
2007 Feb 17
8
ZFS with SAN Disks and mutipathing
Hi, I just deployed ZFS on a SAN-attached disk array and it's working fine. How do I get the dual-pathing advantage of the disks (like DMP in Veritas)? Can someone point me to the correct doc and setup? Thanks in Advance. Rgds Vikash Gupta This message posted from opensolaris.org
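On Solaris the rough equivalent of Veritas DMP is MPxIO, enabled with stmsboot; after the reboot the pool is seen on the single multipathed device names. A sketch of the usual sequence:

    stmsboot -e     # enable MPxIO on supported HBA ports; prompts for a reboot
    stmsboot -L     # after the reboot, list old-to-new device name mappings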