Displaying 20 results from an estimated 3000 matches similar to: "simulating directio on zfs?"
2009 Apr 20
6
simulating directio on zfs?
I had to let this go and get on with testing DB2 on Solaris, abandoning ZFS
on local disks in x64 Solaris 10 5/08.
The situation was that:
* DB2 buffer pools occupied up to 90% of 32GB RAM on each host
* DB2 cached the entire database in its buffer pools
    o having the file system repeat this was not helpful
* running high-load DB2 tests for 2 weeks showed 100%
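A common way to take the file system out of the caching picture in a case like this (a sketch only; the dataset name tank/db2 and the 4 GB cap are assumptions, and the primarycache property only exists on later ZFS releases) is to cache just metadata for the DB2 datasets and cap the ARC in /etc/system:

    # cache only metadata for the DB2 dataset; DB2's buffer pools cache the data
    zfs set primarycache=metadata tank/db2

    # /etc/system entry to cap the ARC at 4 GB (takes effect after a reboot)
    set zfs:zfs_arc_max = 4294967296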
2007 Nov 27
4
SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush
Hi,
I read some articles on solarisinternals.com, like the "ZFS_Evil_Tuning_Guide" at http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They clearly suggest disabling the cache flush: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH .
It seems to be the only serious article on the net about this subject.
Could someone here state on this
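For reference, the tunable that guide describes is set in /etc/system, or flipped on a running kernel with mdb; as the guide stresses, this is only safe when the array's write cache is genuinely non-volatile:

    # /etc/system (takes effect after the next reboot)
    set zfs:zfs_nocacheflush = 1

    # or change it on the live kernel (reverts at reboot)
    echo zfs_nocacheflush/W0t1 | mdb -kw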
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files @ a time.
If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it @ once, it can
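For context, a four-disk striped mirror like the one described is normally built as two mirrored pairs, so reads and writes are spread across both pairs (device names below are hypothetical):

    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    zpool status tank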
2007 Sep 17
4
ZFS Evil Tuning Guide
In general, tuning should not be done; best practices
should be followed.
So get well acquainted with this first:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Then, if you must, this could soothe or sting:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
So drive carefully.
-r
2009 Mar 04
5
Oracle database on zfs
Hi,
I am wondering if there is a guideline on how to configure ZFS on a server
with Oracle database?
We are experiencing some slowness on writes to the ZFS filesystem. It takes about
530 ms to write 2 KB of data.
We are running Solaris 10 u5 127127-11 and the back-end storage is a RAID5
EMC EMX.
This is a small database, with about 18 GB of storage allocated.
Are there any tunable parameters that we can apply to
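A commonly suggested starting point for Oracle on ZFS (a sketch; the dataset names are hypothetical, and recordsize only affects files created after it is set) is to match the data files' recordsize to an 8 KB db_block_size and keep redo logs on their own dataset:

    # data files: match recordsize to an 8 KB Oracle block size
    zfs set recordsize=8k tank/oradata

    # redo logs on a separate dataset, left at the default 128 KB recordsize
    zfs create tank/oraredo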
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using zfs for this. We are used to using Direct I/O to bypass file system caching (let the DB do this). Does this exist for zfs?
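ZFS at the time of this thread does not honor O_DIRECT. The usual approximation (a sketch; the dataset name is hypothetical, and the primarycache property only appears in later releases) is to stop caching file data in the ARC so only the database buffers it, and to match recordsize to InnoDB's 16 KB page before loading data:

    zfs set primarycache=metadata tank/mysql
    zfs set recordsize=16k tank/mysql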
2007 Dec 21
1
Odd behavior of NFS of ZFS versus UFS
I have a test cluster running HA-NFS that shares both ufs and zfs based file systems. However, the behavior that I am seeing is a little perplexing.
The Setup: I have Sun Cluster 3.2 on a pair of SunBlade 1000s connecting to two T3B partner groups through a QLogic switch. All four bricks of the T3B are configured as RAID-5 with a hot spare. One brick from each pair is mirrored with VxVM
2008 Dec 02
18
How to dig deeper
In order to get more information on IO performance problems I created the script below:
#!/usr/sbin/dtrace -s
/* Trace the first write(2)-family syscall made by the target pid ($1)
   and follow that thread through the kernel with fbt. */

#pragma D option flowindent

syscall::*write*:entry
/pid == $1 && guard++ == 0/
{
    self->ts = timestamp;     /* timestamp the syscall entry */
    self->traceme = 1;        /* flag this thread for fbt tracing */
    printf("fd: %d", arg0);
}

fbt:::
/self->traceme/
{
    /* elapsed = timestamp - self->ts;
    printf("
2007 Oct 24
1
memory issue
Hello,
I received the following question from a company I am working with:
We are having issues with our early experiments with ZFS with volumes
mounted from a 6130.
Here is what we have and what we are seeing:
T2000 (geronimo) on the fibre with a 6130.
6130 configured with UFS volumes mapped and mounted on several
other hosts.
it's the only host using a ZFS volume (only
2011 Oct 26
1
Re: ceph on btrfs [was Re: ceph on non-btrfs file systems]
2011/10/26 Sage Weil <sage@newdream.net>:
> On Wed, 26 Oct 2011, Christian Brunner wrote:
>> >> > Christian, have you tweaked those settings in your ceph.conf? It would be
>> >> > something like 'journal dio = false'. If not, can you verify that
>> >> > directio shows true when the journal is initialized from your osd log?
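For reference, the option Sage is asking about goes in the [osd] section of ceph.conf; this is just a sketch of the old-style option syntax, not taken from Christian's actual configuration:

    [osd]
        journal dio = false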
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi,
I have a disk array that provides striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping.
Is this possibile with ZFS?
As far as I understood, if I use
zpool create myPool lun-1 lun-2 ... lun-n
I will get a RAID0 striping where each data block is split across all "n" LUNs.
If that's
2009 Aug 24
2
[RFC] Early look at btrfs directIO read code
This is my still-working-on-it code for btrfs directIO read.
I'm posting it so people can see the progress being made on
the project and can take an early shot at telling me this is
just a bad idea and I'm crazy if they want to, or point out
where I made some stupid mistake with btrfs core functions.
The code is not complete and *NOT* ready for review or testing.
I looked at
2007 Nov 15
3
read/write NFS block size and ZFS
Hello all...
I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That was the best performance I got, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I can take a performance hit, so I want to know if there is a "recommendation" for block size on NFS/ZFS, or
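For comparison with the Linux setup, the 8192-byte transfer size is requested from the client side with mount options along these lines (the server name and paths are hypothetical):

    mount -t nfs -o vers=3,rsize=8192,wsize=8192 nfsserver:/export/data /mnt/data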
2005 Dec 21
4
ZFS, COW, write(2), directIO...
Hi ZFS Team,
I have a couple of questions...
Assume that the maximum slab size that ZFS supports is x. (I am assuming
there is a maximum.) An application does a (single) write(2) for 2x
bytes. Does ZFS/COW guarantee that either all the 2x bytes are
persistent or none at all? Consider a case where there is a panic after
x bytes has gone to disk and the change propagated to the uber block. Do
2008 May 20
7
IO probes and forcedirectio
Hi,
I'm working on some performance analysis with our database, and it seems that when the file system (UFS) is mounted with forcedirectio, the io probes are not triggered when an I/O event occurs.
Could you confirm that? If so, why?
Seb
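One quick way to see what still fires under forcedirectio (a sketch; it assumes ufs_directio_write is the relevant UFS entry point on this build) is to count io provider events against the directio code path for a few seconds:

    dtrace -qn '
    io:::start                      { @["io provider"] = count(); }
    fbt::ufs_directio_write:entry   { @["ufs_directio_write"] = count(); }
    tick-10s                        { exit(0); }'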
2008 Dec 19
4
ZFS boot and data on same disk - is this supported?
I have read the ZFS best practice guide located at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
However, I have questions about whether we support using slices for data on the
same disk that we use for ZFS boot. What issues does this create if we
have a disk failure in a mirrored environment? Does anyone have examples
of customers doing this in production environments?
I
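As an illustration of the layout being asked about (device and slice numbers are hypothetical), the root pool and a data pool would each live on their own slice of the same mirrored disks:

    # root pool mirrored on slice 0 of both disks (normally created by the installer)
    zpool create rpool mirror c0t0d0s0 c0t1d0s0

    # separate data pool mirrored on slice 7 of the same disks
    zpool create datapool mirror c0t0d0s7 c0t1d0s7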
2007 Feb 27
16
understanding zfs/thunoer "bottlenecks"?
Currently I'm trying to figure out the best ZFS layout for a Thumper with respect to read AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
2008 Jul 07
1
ZFS and Caching - write() syscall with O_SYNC
IHAC using ZFS in production, and he's opening up some files with the
O_SYNC flag. This affects subsequent write()'s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and file status to be physically updated.
Because of this, he's seeing some delays on the file write()'s. This is
verified with
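A common remedy for slow O_SYNC/fsync writes on ZFS (a sketch; the pool name and device are assumptions) is to give the intent log a dedicated low-latency device and then watch how much traffic it absorbs:

    zpool add tank log c4t0d0
    zpool iostat -v tank 5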
2009 May 13
4
backup and restore of ZFS root disk using DVD driveand DAT tape drive
Dear all,
given a DVD drive and a DAT tape drive, and using Solaris 10 U7 (5/09),
how can we plan a total backup of the ZFS root disk and a procedure to
recover it?
Previously, using UFS, we just needed to boot from the Solaris OS DVD
media and use ufsdump, ufsrestore and installboot.
Can anybody point me to how to achieve the same thing when the whole
system disk is busted?
Thanks in
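The ZFS-root equivalent of the ufsdump/ufsrestore procedure (a sketch; the tape device, pool name and disk are assumptions) is built on zfs send/receive, plus installboot after booting from the Solaris DVD:

    # back up the whole root pool to tape
    zfs snapshot -r rpool@backup
    zfs send -R rpool@backup > /dev/rmt/0n

    # restore: boot from the Solaris DVD, recreate and import rpool, then
    zfs receive -Fd rpool < /dev/rmt/0n

    # reinstall the boot blocks (SPARC shown; x86 uses installgrub instead)
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0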
2008 Mar 27
3
kernel memory and zfs
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and ftp servers running in the various zones.
I understand that using ZFS will increase kernel memory usage; however, I am a bit concerned at this point.
root@servername:~/zonecfg# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm
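Two quick ways to see how much of that kernel memory the ARC is actually holding (standard commands, shown as a sketch rather than output from this system):

    # kernel memory summary from the running kernel
    echo ::memstat | mdb -k

    # current ARC size and its configured ceiling, in bytes
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max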