Displaying 20 results from an estimated 20000 matches similar to: "Self-tuning recordsize"
2007 Aug 21
12
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5 KB
and 500 KB in size?
I am asking because I could have sworn that I read somewhere that it
isn't, but I can't find the reference.
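For what it's worth, one quick way to measure what a small file actually costs on disk rather than guess; the pool and path here are hypothetical:

    # first column of -s output is allocated blocks (512-byte units on Solaris)
    ls -ls /tank/images/sample.tif
    # recordsize and compression both affect small-file overhead
    zfs get recordsize,compression tank/images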
Thanks,
Brian
--
- Brian Gupta
http://opensolaris.org/os/project/nycosug/
2008 Jun 24
4
zfs send and recordsize
Hi Everyone,
I take a snapshot and do a zfs send on a filesystem with a recordsize
of 16k, and redirect the output to a plain file. Later, I use cat
sentfs | zfs receive otherpool/filesystem. In this case the new
filesystem's recordsize will be the default 128k again. The other
filesystem attributes (for example atime) are reverted to defaults
too. Okay, I can set these later,
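A minimal sketch of how the properties could be carried along, assuming a reasonably recent ZFS: -p includes dataset properties in the send stream, and OpenZFS additionally lets -o force a property at receive time. The dataset and snapshot names below are placeholders.

    # include dataset properties (recordsize, atime, ...) in the stream
    zfs send -p tank/fs@snap1 > sentfs
    # or pin the property explicitly when receiving (OpenZFS)
    cat sentfs | zfs receive -o recordsize=16k otherpool/filesystem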
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks,
A colleague and I are currently involved in a prototyping exercise
to evaluate ZFS against our current filesystem. We are looking at the
best way to arrange the disks in a 3510 storage array.
We have been testing with the 12 disks on the 3510 exported as "nraid"
logical devices. We then configured a single ZFS pool on top of this,
using two raid-z arrays. We are getting
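For reference, a layout like the one described could be built along these lines; the device names are hypothetical:

    # one pool, two 6-disk raid-z vdevs across the 12 exported LUNs
    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0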
2008 Feb 14
9
100% random writes coming out as 50/50 reads/writes
I'm running on s10s_u4wos_12b and doing the following test.
Create a pool, striped across 4 physical disks from a storage array.
Write a 100GB file to the filesystem (dd from /dev/zero out to the file).
Run I/O against that file, doing 100% random writes with an 8K block size.
zpool iostat shows the following...
capacity operations bandwidth
pool used
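For reproducibility, the file-creation step described above might look like this; the path is hypothetical:

    # write a 100GB file of zeros to seed the test
    dd if=/dev/zero of=/tank/testfile bs=1024k count=102400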
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the blocksize of a particular file. I know the
blocksize for a particular file is decided at creation time, as a
function of the write sizes done and the recordsize property of the
dataset.
How can I access that information? Some zdb magic?
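Some zdb magic of roughly this shape should do it; a file's ZFS object number is its inode number, and the pool/dataset names here are hypothetical:

    # find the file's object number (same as its inode number)
    ls -i /tank/fs/somefile
    # dump that object; the 'dblk' field in the output is the data block size
    zdb -ddddd tank/fs <object-number>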
--
Jesus Cea Avion
jcea at
2007 Mar 21
3
zfs send speed
Howdy folks.
I've a customer looking to use ZFS in a DR situation. They have a large
data store where they will be taking snapshots every N minutes or so,
sending the difference of the snapshot and previous snapshot with zfs
send -i to a remote host, and in case of DR firing up the secondary.
However, I've seen a few references to the speed of zfs send being,
well, a bit
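The replication cycle being described is roughly the following; hostnames, dataset and snapshot names are hypothetical:

    # take a new snapshot and ship only the delta since the previous one
    zfs snapshot tank/data@snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | \
        ssh drhost zfs receive -F tank/data
    # on the next cycle, snap2 becomes the new baseline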
2006 May 23
4
Misc questions
Some miscellaneous questions:
* When you share a ZFS fs via NFS, what happens to files and filesystems that exceed the limits of NFS?
* Is there a recommendation or some guidelines to help answer the question "how full should a pool be before deciding it's time to add disk space to a pool?" (see the sketch after this list)
* Migrating pre-ZFS backups to ZFS backups: is there a better method than "restore
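On the pool-fullness question, the raw numbers at least are easy to watch; the pool name here is hypothetical:

    # CAP shows the percentage of the pool in use
    zpool list tank
    # free space as the filesystems see it
    zfs get available tank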
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributor grants are set
to expire on 02-24-2009 we need to renew the members that are still
contributing at core contributor levels. We should also add some new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill
2006 Oct 17
10
ZFS, home and Linux
Hello,
I'm trying to implement a NAS server with Solaris/NFS and, of course, ZFS. But for that, we have a little problem... what about the /home filesystem? I mean, I have a lot of Linux clients, and the "/home" directory is on an NFS server (today, Linux). I want to use ZFS, and
change the home "directory" entries like /home/leal to "filesystems" like
/home/leal
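A minimal sketch of that per-user layout, with a hypothetical pool name (the user path is from the message above):

    # one filesystem per user instead of plain directories
    zfs create tank/home
    zfs create tank/home/leal
    # children inherit the NFS share setting
    zfs set sharenfs=on tank/home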
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
TestPool 696G 19.1G 677G 2% 1.13x ONLINE -
When I ran a
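One way to look past zpool list when judging savings is to dump the dedup table itself; both commands below use the pool name from the message:

    # DDT histogram plus an overall dedup ratio estimate
    zdb -DD TestPool
    # the same ratio zpool list reports, as a standalone property
    zpool get dedupratio TestPool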
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the
2010 Mar 01
1
ARC & Maxphys & recordsize
Greetings all,
Can someone explain why I was able to successfully issue (through an
application) I/Os of 128K each (monitored by DTrace) while my maxphys
is only 56K?
My understanding is that maxphys is the maximum I/O size that the storage
device can handle in a single I/O; it has been well documented that if an
application issued an I/O larger than maxphys, it would be broken down into
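A DTrace one-liner of the kind presumably used for the monitoring, counting I/Os by size as they reach the device layer (a sketch, not necessarily the original script):

    dtrace -n 'io:::start { @[args[0]->b_bcount] = count(); }'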
2007 May 02
16
ZFS Support for remote mirroring
Does ZFS support any type of remote mirroring? It seems at present my only two options to achieve this would be Sun Cluster or Availability Suite. I thought that this functionality was in the works, but I haven't heard anything lately.
Thanks!
Aaron Newcomb
http://opennewsshow.org
http://thesourceshow.org
2010 Jul 20
16
zfs raidz1 and traditional raid 5 perfomrance comparision
Hi,
For ZFS raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal those of one physical disk. Since raidz1 is like RAID 5, does RAID 5 have the same random-I/O performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
2010 Dec 09
3
ZFS Prefetch Tuning
Hi All,
Is there a way to tune the zfs prefetch on a per-pool basis? I have a
customer that is seeing slow performance on a pool that contains multiple
tablespaces from an Oracle database; looking at the LUNs associated with
that pool, they are constantly at 80% - 100% busy. Looking at the output
from arcstat for the miss % on data, prefetch and metadata, we are
getting around 5 - 10 % on data,
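As far as I know prefetch is a system-wide tunable rather than a per-pool property; on Solaris it is usually toggled like this (a sketch):

    # permanently, in /etc/system (takes effect after reboot)
    set zfs:zfs_prefetch_disable = 1

    # or live on a running system, via mdb
    echo "zfs_prefetch_disable/W0t1" | mdb -kw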
2006 Sep 11
95
Proposal: multiple copies of user data
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
--matt
A. INTRODUCTION
ZFS stores multiple copies of all metadata. This is accomplished by
storing up to three DVAs (Disk Virtual Addresses) in each block pointer.
This feature is known as "Ditto Blocks". When
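For reference, the 'copies' property as it eventually shipped is set per dataset; the names below are hypothetical:

    # keep two copies of every data block in this filesystem
    zfs set copies=2 tank/home
    # or request three copies at creation time
    zfs create -o copies=3 tank/important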
2007 Nov 29
10
ZFS write time performance question
Hi,
This is a ZFS performance question in regard to SAN traffic.
We are trying to benchmark ZFS vs VxFS file systems, and I get the following performance results.
Test Setup:
Solaris 10: 11/06
Dual port Qlogic HBA with SFCSM (for ZFS) and DMP (for VxFS)
Sun Fire v490 server
LSI Raid 3994 on backend
ZFS Record Size: 128KB (default)
VxFS Block Size: 8KB (default)
The only thing
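One knob worth aligning in a comparison like this: the ZFS recordsize can be matched to the 8KB VxFS block size (dataset name hypothetical):

    # match record size to the VxFS block size for a fairer comparison
    zfs set recordsize=8k tank/testfs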
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux
ext3 over iSCSI to zvols, especially with small writes. Does running
a journaled filesystem on a zvol turn the block storage into Swiss
cheese? I am considering serving ext3 journals (and possibly swap
too) off a raw, hardware-mirrored device. Before I do (and I'll
write up any results) I'd like to know
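The external-journal idea can be tried with stock e2fsprogs; the device names below are hypothetical:

    # turn the mirrored device into a dedicated external journal
    mke2fs -O journal_dev /dev/sdc
    # build ext3 on the iSCSI-backed zvol, pointing at that journal
    mkfs.ext3 -J device=/dev/sdc /dev/sdb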