similar to: simulating directio on zfs?

Displaying 20 results from an estimated 7000 matches similar to: "simulating directio on zfs?"

2008 Jan 31
1
simulating directio on zfs?
The big problem that I have with non-directio is that buffering delays program execution. When reading/writing files that are many times larger than RAM without directio, it is very apparent that system response drops through the floor: it can take several minutes for an ssh login to prompt for a password. This is true both for UFS and ZFS. Repeat the exercise with directio on UFS and there is no
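For reference, on UFS direct I/O can be requested per file with directio(3C); ZFS has no equivalent switch, which is the point of the question. A minimal sketch of the UFS side, with a hypothetical path:

#include <sys/types.h>
#include <sys/fcntl.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* hypothetical path on a UFS filesystem; directio(3C) is advisory */
        int fd = open("/ufs/bigfile", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }
        if (directio(fd, DIRECTIO_ON) < 0)
                perror("directio");   /* fails on ZFS, which does not implement it */
        /* large reads/writes now bypass the page cache on UFS */
        close(fd);
        return 0;
}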
2007 Oct 02
53
Direct I/O ability with zfs?
We are using MySQL, and love the idea of using zfs for this. We are used to using Direct I/O to bypass file system caching (let the DB do this). Does this exist for zfs?
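ZFS has no O_DIRECT-style bypass, but later releases grew per-dataset cache controls that get close for databases. A sketch, with a hypothetical tank/mysql dataset:

# keep only metadata in the ARC for the database filesystem
zfs set primarycache=metadata tank/mysql
# and keep its data out of the L2ARC as well
zfs set secondarycache=none tank/mysql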
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance. I did some simple mkfile 512G tests and found out that, on average, ~ 500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
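A test of that shape can be timed and watched per disk like this (the pool mountpoint is hypothetical):

# time a 512 GB sequential write
ptime mkfile 512g /tank/t1
# in another terminal, watch per-device throughput while it runs
iostat -xnz 5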
2005 Dec 21
4
ZFS, COW, write(2), directIO...
Hi ZFS Team, I have a couple of questions... Assume that the maximum slab size that ZFS supports is x. (I am assuming there is a maximum.) An application does a (single) write(2) for 2x bytes. Does ZFS/COW guarantee that either all the 2x bytes are persistent or none at all? Consider a case where there is a panic after x bytes has gone to disk and the change propagated to the uber block. Do
2008 Dec 02
18
How to dig deeper
In order to get more information on IO performance problems I created the script below:

#!/usr/sbin/dtrace -s
#pragma D option flowindent

syscall::*write*:entry
/pid == $1 && guard++ == 0/
{
        self->ts = timestamp;
        self->traceme = 1;
        printf("fd: %d", arg0);
}

fbt:::
/self->traceme/
{
        /* elapsed = timestamp - self->ts; printf("
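The $1 macro variable means the script takes the target process id as its first argument, e.g. (script name and pid hypothetical):

chmod +x dig.d
./dig.d 1234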
2006 Oct 26
3
Re: ZFS hangs systems during copy
> ZFS 11.0 on Solaris release 06/06, hangs systems when
> trying to copy files from my VXFS 4.1 file system.
> any ideas what this problem could be?

What kind of system is that? How much memory is installed? I'm able to hang an Ultra 60 with 256 MByte of main memory, simply by writing big files to a ZFS filesystem. The problem happens with both Solaris 10 6/2006 and Solaris
2012 Dec 01
3
6Tb Database with ZFS
Hello, I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I want to set the arc_max parameter so ZFS can't use all my system's memory, but I don't know how much I should set. Do you think 24GB will be enough for a 6TB database? Obviously the more the better, but I can't set aside too much memory. Has someone successfully implemented something similar? We ran some tests and the
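On Solaris the ARC cap is set in /etc/system; a sketch for the 24GB figure discussed above, taking effect at the next reboot:

* /etc/system -- cap the ZFS ARC at 24 GB (0x600000000 bytes)
set zfs:zfs_arc_max = 0x600000000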
2013 May 09
4
recommended memory for zfs
Hello, a zfs question about memory. I heard zfs is very RAM hungry. Services I'm looking to run:
- nginx
- postgres
- php-fpm
- python
I have a machine with two quad-core CPUs but only 4 GB of memory, and I'm looking to buy more RAM now. What would be the recommended amount of memory for zfs across 6 drives on this setup? Also, can 9.1 now boot to zfs from the installer? (no tricks for post install) Thanks
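On FreeBSD the ARC can be capped with a loader tunable; a sketch that would keep zfs from crowding out the listed services on a 4 GB box (the exact cap is a judgment call):

# /boot/loader.conf
vfs.zfs.arc_max="1G"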
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS Version 10? What except zfs send/receive can be done to free the fragmented space? One ZFS was used for some months to store some large disk images (each 50 GByte large) which were copied there with rsync. This ZFS then reports 6.39 TByte usage with zfs list and only 2 TByte usage with du. The other ZFS was used for similar
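Before suspecting fragmentation, it is worth asking zfs where it thinks the space went; snapshots and reservations are the usual explanation when zfs list and du disagree:

# per-dataset breakdown: data vs snapshots vs reservations
zfs list -o space
# list snapshots and the space each one pins
zfs list -t snapshot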
2004 Jul 08
0
directio for ext3 file system
Hi, Does anybody know whether the ext3 file system supports direct I/O? If so, how do you enable it? I went through the man page of mount, and it did not mention such an option. My system is running: Red Hat Enterprise Linux AS release 3 (Taroon Update 2), kernel 2.4.21-15.ELsmp on an i686. Thanks much!!! David.
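On ext3 (which RHEL 3 kernels generally support for this), direct I/O is not a mount option: it is requested per file with the O_DIRECT open(2) flag, and the buffer, offset, and length must be suitably aligned. A minimal sketch, path hypothetical:

#define _GNU_SOURCE   /* exposes O_DIRECT in glibc */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        char *buf;
        int fd = open("/data/bigfile", O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }
        /* O_DIRECT requires aligned buffers; 4096 covers common block sizes */
        if (posix_memalign((void **)&buf, 4096, 4096) != 0) { close(fd); return 1; }
        if (read(fd, buf, 4096) < 0)
                perror("read");
        free(buf);
        close(fd);
        return 0;
}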
2010 Dec 21
5
relationship between ARC and page cache
One thing I've been confused about for a long time is the relationship between ZFS, the ARC, and the page cache. We have an application that's a quasi-database. It reads files by mmap()ing them. (writes are done via write()). We're talking 100TB of data in files that are 100k->50G in size (the files have headers to tell the app what segment to map, so mapped chunks
2009 Aug 24
2
[RFC] Early look at btrfs directIO read code
This is my still-working-on-it code for btrfs directIO read. I'm posting it so people can see the progress being made on the project and can take an early shot at telling me this is just a bad idea and I'm crazy if they want to, or point out where I made some stupid mistake with btrfs core functions. The code is not complete and *NOT* ready for review or testing. I looked at
2011 Oct 26
1
Re: ceph on btrfs [was Re: ceph on non-btrfs file systems]
2011/10/26 Sage Weil <sage@newdream.net>:
> On Wed, 26 Oct 2011, Christian Brunner wrote:
>> >> > Christian, have you tweaked those settings in your ceph.conf? It would be
>> >> > something like 'journal dio = false'. If not, can you verify that
>> >> > directio shows true when the journal is initialized from your osd log?
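For context, the setting being quoted lives in the [osd] section of ceph.conf:

[osd]
    ; disable direct I/O on the OSD journal, as suggested above
    journal dio = false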
2007 Jan 08
11
NFS and ZFS, a fine combination
Just posted: http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine -- Roch Bourbonnais, Senior Performance Analyst, Sun Microsystems
2006 Oct 15
3
open(2) O_DIRECT on smbmount gives EINVAL
Does samba 3.0.23c not support the use of O_DIRECT? When I try to open an smbmount'd file using O_DIRECT, I get EINVAL. I am able to use O_DIRECT with no problems on a block device and nfs mounts, so I know the kernel supports it. samba: 3.0.23c, kernel: 2.6.9-42.0.3.EL (32-bit). I am using the below code for my test. smb fails on open(2).

#include <fcntl.h>
#include
2007 Sep 17
4
ZFS Evil Tuning Guide
In general, tuning should not be done, and best practices should be followed. So first get well acquainted with this: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Then, if you must, this could soothe or sting: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide So drive carefully. -r
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: freebsd with a zpool v28 and a nexenta (opensolaris b134) running zpool v26. Replication (with zfs send/receive) from the nexenta box to the freebsd box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the tab key, I get a panic. check the panic @
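Sending from the older pool version (v26) to the newer one (v28) is the supported direction. A sketch of such a replication step; the source pool name and hostname are hypothetical, the target follows the /remotepool/users path above:

# on the nexenta side: snapshot recursively and stream to the freebsd box
zfs snapshot -r tank/users@repl1
zfs send -R tank/users@repl1 | ssh freebsd-host zfs receive -Fu remotepool/users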
2006 Sep 28
13
jbod questions
Folks, We are in the process of purchasing new SANs that our mail server runs on (JES3). We have moved our mailstores to zfs and continue to have checksum errors -- they are corrected, but this is an improvement over the UFS inode errors that required system shutdown and fsck. So, I am recommending that we buy small jbods, do raidz2, and let zfs handle the raiding of these boxes. As we need more
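A sketch of the layout being recommended, one raidz2 vdev per small jbod (pool and device names hypothetical):

# first jbod becomes the pool's initial raidz2 vdev
zpool create mailpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# each additional jbod is added later as its own raidz2 vdev
zpool add mailpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0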
2011 Aug 11
6
unable to mount zfs file system.. please help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa | grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool1       120K   228G    21K  /pool1
pool1/fs1    21K   228G    21K  /vik
[root at
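A first troubleshooting step here would be to ask zfs why the child dataset is not mounted; a sketch:

# check mount state and target for the dataset that should be at /vik
zfs get mounted,mountpoint pool1/fs1
# attempt to mount all datasets and capture any error
zfs mount -a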
2010 Feb 19
3
samba file locking
Hi samba experts, We have a strange file locking problem and I hope someone can help. We use some CentOS 5 servers, which run samba 3.0.33, to share files of a java application to clients. Clients are mostly CentOS 5 (the same version as the server), but there are a few legacy windows clients (the reason why we use samba and not nfs). And now the problem: when our developer uploads a new jar file to
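The snippet cuts off before the symptom, but locking problems of this shape are often worked through the oplock settings on the share; a hedged sketch (share name hypothetical, and whether it applies here depends on the rest of the report):

[appshare]
    ; disable opportunistic locking for the share holding the jar files
    oplocks = no
    level2 oplocks = no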