Displaying 20 results from an estimated 2000 matches similar to: "read/write NFS block size and ZFS"
2007 Sep 17
4
ZFS Evil Tuning Guide
In general, tuning should not be done; the best practices should be
followed instead.
So get well acquainted with this first:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Then, if you must, this could soothe or sting:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
So drive carefully.
-r
2009 Mar 04
5
Oracle database on zfs
Hi,
I am wondering if there is a guideline on how to configure ZFS on a server
running an Oracle database.
We are experiencing some slowness on writes to the ZFS filesystem. It takes about
530 ms to write 2 KB of data.
We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5
EMC EMX.
This is a small database with about 18 GB of storage allocated.
Are there tunable parameters that we can apply to
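A minimal sketch of the usual starting point from the ZFS Best Practices Guide: match the recordsize of the data filesystem to the Oracle block size. The "dbpool/oradata" name and the 8K block size below are assumptions, not details from this post.
    # Set recordsize before loading data; it only affects newly written files.
    zfs create dbpool/oradata
    zfs set recordsize=8k dbpool/oradata
    zfs get recordsize dbpool/oradata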
2009 Sep 08
4
Can ZFS simply concatenate LUNs (e.g. no RAID0)?
Hi,
I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possible with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get RAID0 striping where each data block is split across all "n" LUNs.
If that's
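For illustration, a minimal sketch with placeholder LUN names. Each LUN becomes its own top-level vdev and ZFS spreads new writes across them dynamically; it does not cut every block into n pieces the way a classic RAID0 stripe would.
    # Placeholder device names; each LUN is a separate top-level vdev.
    zpool create myPool c2t0d0 c2t1d0 c2t2d0
    zpool status myPool    # the LUNs appear side by side under the pool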
2007 Nov 27
4
SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Hi,
I read some articles on solarisinternals.com like "ZFS_Evil_Tuning_Guide" at http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They clearly suggest disabling the cache flush: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH .
It seems to be the only serious article on the net about this subject.
Could someone here state on this
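For reference, a hedged sketch of what the FLUSH section of the Evil Tuning Guide describes; only consider it when every device in the pool sits behind genuinely nonvolatile cache, and verify the exact tunable against your release.
    # Persistent setting, takes effect after a reboot:
    echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system
    # Live change on a running system (not persistent):
    echo 'zfs_nocacheflush/W0t1' | mdb -kw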
2007 Feb 13
4
Best Practises => Keep Pool Below 80%?
In the ZFS Best Practices Guide here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It says:
"Currently, pool performance can degrade when a pool is very full
and file systems are updated frequently, such as on a busy mail
server. Under these circumstances, keep pool space under 80%
utilization to maintain pool performance."
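A hedged sketch of keeping an eye on that threshold; the filesystem name and quota value are placeholders, not a recommendation for any particular setup.
    # The CAP column of zpool list shows pool utilization at a glance.
    zpool list
    # One way to leave headroom: cap the busiest filesystem with a quota.
    zfs set quota=800G tank/mail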
2008 Dec 19
4
ZFS boot and data on same disk - is this supported?
I have read the ZFS best practice guide located at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
However, I have questions about whether we support using slices for data on the
same disk we use for ZFS boot. What issues does this create if we
have a disk failure in a mirrored environment? Does anyone have examples
of customers doing this in production environments?
I
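A hypothetical slice layout for that situation (slice numbers and device names are assumptions): s0 on each disk for the root pool, s7 for a separate data pool, both mirrored across the pair.
    zpool create rpool mirror c0t0d0s0 c0t1d0s0
    zpool create datapool mirror c0t0d0s7 c0t1d0s7
    # Note: when given slices rather than whole disks, ZFS does not enable
    # the disk write cache on its own.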
2006 Oct 17
10
ZFS, home and Linux
Hello,
I'm trying to implement a NAS server with Solaris/NFS and, of course, ZFS. But for that, we have a little problem... what about the /home filesystem? I mean, I have a lot of Linux clients, and the "/home" directory is on an NFS server (today, Linux). I want to use ZFS and
change the home "directories" like /home/leal into "filesystems" like
/home/leal
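A minimal sketch of the per-user layout being described, with placeholder pool and user names; the sharenfs property set on the parent is inherited by the child filesystems.
    zfs create -o mountpoint=/home tank/home
    zfs create tank/home/leal
    zfs set sharenfs=on tank/home
    zfs list -r tank/home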
2008 Jan 31
1
simulating directio on zfs?
The big problem that I have with non-directio is that buffering delays program execution. When reading/writing files that are many times larger than RAM without directio, it is very apparent that system response drops through the floor; it can take several minutes for an ssh login to prompt for a password. This is true for both UFS and ZFS.
Repeat the exercise with directio on UFS and there is no
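One hedged approximation on ZFS builds that have the primarycache property (it is not present in every Solaris 10 update): keep file data out of the ARC and cache metadata only, which is roughly the caching side of what directio buys on UFS. The filesystem name is a placeholder.
    zfs set primarycache=metadata tank/bigfiles
    zfs get primarycache tank/bigfiles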
2008 Mar 13
3
Round-robin NFS protocol with ZFS
Hello all,
I was wondering if such a scenario could be possible:
1 - Export/import a ZFS filesystem in two solaris servers.
2 - Export that filesystem (NFS).
3 - Mount that filesystem on clients in two different mount points (just to authenticate in both servers/UDP).
4a - Use some kind of "man-in-the-middle" to auto-balance the connections (the same IP on servers)
or
4b - Use different
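For illustration only, a sketch of the NFS export/mount pieces of that scenario, with placeholder names; note that a given ZFS pool can be imported on only one host at a time, which the balancing idea above would have to work around.
    # On whichever server currently has the pool imported:
    zfs set sharenfs=rw tank/data
    # On a Solaris client:
    mount -F nfs server1:/tank/data /mnt/data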
2010 Apr 28
3
Solaris 10 default caching segmap/vpm size
What's the default size of the file system cache for Solaris 10 x86, and can it be tuned?
I read various posts on the subject and it's confusing.
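A hedged pointer, assuming the question is about the segmap cache on x86: the Solaris Tunable Parameters Reference Manual describes it as a percentage of physical memory, adjustable via segmap_percent in /etc/system. The value below is only an example; check the manual for your update before changing it.
    echo 'set segmap_percent = 20' >> /etc/system    # reboot required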
2009 Oct 22
1
raidz "ZFS Best Practices" wiki inconsistency
<http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations>
says that the number of disks in a RAIDZ should be (N+P) with
N = {2,4,8} and P = {1,2}.
But if you go down the page just a little further to the Thumper
configuration examples, none of the 3 examples follow this recommendation!
I will have 10 disks to put into a
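Two alternative layouts for 10 disks that would fit the (N+P) guidance quoted above; device names are placeholders and only one of the pools would be created.
    # 2 x (4+1) raidz1:
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                      raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0
    # or 1 x (8+2) raidz2:
    # zpool create tank raidz2 c1t0d0 c1t1d0 ... c1t9d0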
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology...
I'm actually speaking of hardware :)
ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed, it should be able to handle a lot of disks.
I want to
2009 Aug 05
2
?: SMI vs. EFI label and a disk's write cache
For Solaris 10 5/09...
There are supposed to be performance improvements if you create a zpool
on a full disk, such as one with an EFI label. Does the same apply if
the full disk is used with an SMI label, which is required to boot?
I am trying to determine the trade-off, if any, of having a single rpool
on cXtYd0s2, if I can even do that, and improved performance compared to
having two
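A hedged way to look at the write-cache side of that question; the interactive format steps are from memory of the expert mode, so double-check on the system itself.
    # Inspect the disk write cache (expert mode, interactive):
    format -e
    #   -> select the disk, then: cache -> write_cache -> display
    # When ZFS is given the whole disk (cXtYd0), it manages the write cache
    # itself; with an SMI slice such as cXtYd0s2 that is not done automatically.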
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database back ends.
So now we are looking for a new database server to give us a big performance boost, and also the possibility of scaling.
Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
2010 Feb 12
13
SSD and ZFS
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here, so sunmanagers please excuse the double
post:
I have inherited an X4140 (8 SAS slots) and have just set up the system
with Solaris 10 09. I first set up the system on a mirrored pool over
the first two disks:
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME
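A hypothetical follow-on for the remaining slots, with assumed device names, on an update that supports separate log and cache vdevs: a data pool on the spinning disks, one SSD as a log device, another as an L2ARC cache device.
    zpool create data mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
    zpool add data log c1t6d0
    zpool add data cache c1t7d0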
2008 Jul 07
1
ZFS and Caching - write() syscall with O_SYNC
IHAC (I have a customer) using ZFS in production, and he's opening some files with the
O_SYNC flag. This affects subsequent write()s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and file status to be physically updated.
Because of this, he's seeing some delays on the file write()s. This is
verified with
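One hedged mitigation, on releases that support separate log devices: put the ZIL for that pool on a fast dedicated device so the synchronous writes do not wait on the main disks. The pool and device names are placeholders.
    zpool add datapool log c3t0d0
    zpool status datapool    # the device shows up under a "logs" section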
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris. I had to
abandon ZFS on local disks in x64 Solaris 10 5/08.
The situation was that:
* DB2 buffer pools occupied up to 90% of 32GB RAM on each host
* DB2 cached the entire database in its buffer pools
o having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
2007 Oct 24
1
memory issue
Hello,
I received the following question from a company I am working with:
We are having issues with our early experiments with ZFS, using volumes
mounted from a 6130.
Here is what we have and what we are seeing:
T2000 (geronimo) on the fibre with a 6130.
6130 configured with UFS volumes mapped and mounted on several
other hosts.
It's the only host using a ZFS volume (only
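The usual remedy in this kind of situation, as documented in the Evil Tuning Guide, is to cap the ARC so ZFS caching does not squeeze the other consumers of memory; the 4 GB figure below is only an example.
    # 0x100000000 bytes = 4 GB; takes effect after a reboot.
    echo 'set zfs:zfs_arc_max = 0x100000000' >> /etc/system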
2007 Sep 26
9
Rule of Thumb for zfs server sizing with (192) 500 GB SATA disks?
I'm trying to get maybe 200 MB/sec over NFS for large movie files (need large capacity to hold all of them). Are there any rules of thumb on how much RAM is needed to handle this (probably RAIDZ for all the disks) with ZFS, and how large a server should be used? The throughput required is not so large, so I am thinking an X4100 M2 or X4150 should be plenty.
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files at a time.
If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it at once, it can