
Displaying 20 results from an estimated 8000 matches similar to: "Multiple filesystem costs? Directory sizes?"

2007 Aug 21
12
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5KB and 500KB in size? I am asking because I could have sworn that I read somewhere that it isn't, but I can't find the reference. Thanks, Brian -- - Brian Gupta http://opensolaris.org/os/project/nycosug/
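For what it's worth, ZFS stores a file smaller than the recordsize in a single block sized to fit the file, so millions of small files do not each consume a full 128K record. A minimal on-disk sanity check, assuming a hypothetical pool named tank:

  # Allocated size (first column, 512-byte blocks) vs. logical size for one image:
  ls -ls /tank/tiff/image0001.tif
  du -h /tank/tiff/image0001.tif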
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for the ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during a storage migration, so it's built on a tight budget. The system currently has 4GB RAM, a 3GHz Core2 Quad, and 8x 500GB WD REII SATA HDDs attached to an 8-port Areca ARC-1220 controller
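For reference, a minimal sketch of the intended layout; the pool and device names are placeholders, and whether a separate log helps depends on how much synchronous write traffic the iSCSI target generates:

  zpool add tank log c2t0d0     # SLOG: absorbs synchronous writes before they hit the data disks
  zpool add tank cache c2t1d0   # L2ARC: extends the read cache beyond the 4GB of RAM
  zpool status tank             # confirm the log and cache vdevs are attached

Note that L2ARC bookkeeping itself consumes ARC memory, so a very large cache device can backfire on a 4GB machine.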
2007 Nov 29
10
ZFS write time performance question
Hi, this is a ZFS performance question regarding SAN traffic. We are trying to benchmark ZFS vs. VxFS file systems and I get the following performance results.

Test setup:
  Solaris 10 11/06
  Dual-port QLogic HBA with SFCSM (for ZFS) and DMP (for VxFS)
  Sun Fire V490 server
  LSI RAID 3994 on the backend
  ZFS record size: 128KB (default)
  VxFS block size: 8KB (default)

The only thing
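One variable worth controlling in such a comparison is the default 128KB ZFS recordsize against the 8KB VxFS block size. A hedged sketch, assuming a hypothetical dataset tank/bench:

  zfs set recordsize=8k tank/bench   # match the VxFS block size
  zfs get recordsize tank/bench
  # recordsize only affects files written after the change, so recreate
  # the benchmark files afterwards.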
2008 Jun 24
4
zfs send and recordsize
Hi everyone, I perform a snapshot and a zfs send on a filesystem with a recordsize of 16k, and redirect the output to a plain file. Later, I use cat sentfs | zfs receive otherpool/filesystem. In this case the new filesystem's recordsize will be the default 128k again. The other filesystem attributes (for example atime) are reverted to defaults too. Okay, I can set these later,
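A plain zfs send stream carries data but not locally set properties. On later OpenZFS releases (not necessarily available on 2008-era Solaris; treat this as an assumption) the properties can be carried or forced explicitly:

  zfs snapshot pool/fs@snap
  # -p includes locally set properties (recordsize, atime, ...) in the stream:
  zfs send -p pool/fs@snap | zfs receive otherpool/filesystem
  # or force a property at receive time:
  cat sentfs | zfs receive -o recordsize=16k otherpool/filesystem2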
2008 Jun 07
4
Mixing RAID levels in a pool
Hi, I had a plan to set up a ZFS pool with different RAID levels, but I ran into an issue based on some testing I've done in a VM. I have 3x 750GB hard drives and 2x 320GB hard drives available, and I want to set up a RAIDZ for the 750GB drives and a mirror for the 320GB drives and add it all to the same pool. I tested detaching a drive and it seems to seriously mess up the entire pool and I
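A hedged sketch of such a pool, created in one step with two top-level vdevs; device names are placeholders. zpool warns about the mismatched redundancy levels and requires -f:

  zpool create -f tank \
      raidz  c1t0d0 c1t1d0 c1t2d0 \
      mirror c2t0d0 c2t1d0
  # Note: zpool detach only applies to mirror members. Pulling a disk out
  # of the raidz vdev merely degrades it, but losing an entire top-level
  # vdev loses the whole pool.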
2009 Mar 16
1
Forensics related ZFS questions
1. Does variable FSB block sizing extend to files larger than the record size, concerning the last FSB allocated? In other words, for files larger than 128KB that utilize more than one full recordsize FSB, will the LAST FSB allocated be 'right-sized' to fit the remaining data, or will ZFS allocate a full recordsize FSB for the last 'chunk' of the file? (This is
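For files that span multiple blocks, ZFS allocates every block, including the last, at the full recordsize (compression aside); only single-block files get a 'right-sized' block. This can be verified per file with zdb; the object number below is hypothetical:

  ls -i /tank/fs/bigfile        # the inode number is the ZFS object number
  zdb -ddddd tank/fs 12345      # dumps the dnode: dblk shows the block size,
                                # and each L0 entry shows its logical/physical size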
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, I created a zpool with a 64k recordsize and enabled dedup on it:

  zpool create -O recordsize=64k TestPool device1
  zfs set dedup=on TestPool

I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list:

  Prompt:~# zpool list
  NAME      SIZE   ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
  TestPool  696G   19.1G  677G   2%  1.13x  ONLINE  -

When I ran a
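The DEDUP column in zpool list is the ratio for blocks actually written, not the savings against the logical size of the source data, which often explains "incorrect" looking numbers. The dedup table itself can be inspected directly:

  zdb -DD TestPool              # DDT histogram: unique vs. duplicated blocks
  zpool get dedupratio TestPool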
2008 May 01
9
ZFS and Linux
Hi all, what is the status of ZFS on Linux, and which kernels are supported? Regards, Mertol

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at
2010 Dec 09
3
ZFS Prefetch Tuning
Hi all, is there a way to tune ZFS prefetch on a per-pool basis? I have a customer that is seeing slow performance on a pool that contains multiple tablespaces from an Oracle database; looking at the LUNs associated with that pool, they are constantly at 80% - 100% busy. Looking at the output from arcstat for the miss % on data, prefetch and metadata, we are getting around 5 - 10% on data,
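On Solaris, file-level prefetch is a global tunable rather than a per-pool one, so the usual approach is to disable it system-wide and measure the effect. A sketch of both the persistent and the live method:

  # /etc/system (persistent, takes effect at boot):
  set zfs:zfs_prefetch_disable = 1

  # live toggle on a running system:
  echo "zfs_prefetch_disable/W0t1" | mdb -kw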
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy
2006 May 19
3
Oracle on ZFS vs. UFS
Hi, I'm preparing a personal TPC-H benchmark. The goal is not to measure or optimize the database performance, but to compare ZFS to UFS in similar configurations. At the moment I'm preparing the tests at home. The test setup is as follows:
  . Solaris snv_37
  . 2 x AMD Opteron 252
  . 4 GB RAM
  . 2 x 80 GB ST380817AS
  . Oracle 10gR2 (small SGA (320m))
The disks also contain the OS
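A common tuning step for such a comparison, sketched here with hypothetical dataset names, is to match the dataset recordsize to Oracle's db_block_size for datafiles while leaving redo logs at the sequential-friendly default:

  zfs create -o recordsize=8k tank/oradata   # match db_block_size (8K)
  zfs create tank/oralog                     # default 128K suits sequential redo writes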
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's newness, since really a lot of that stuff has been in Lustre for years. So a-benchmarking I went. Results at the bottom:
2008 Apr 18
1
lots of small, twisty files that all look the same
A customer has a zpool where their spectral analysis applications create a ton (millions?) of very small files that are typically 1858 bytes in length. They're using ZFS because UFS consistently runs out of inodes. I'm assuming that ZFS aggregates these little files into recordsize (128K?) blobs for writes. This seems to go reasonably well, amazingly enough. Reads are a
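ZFS does not in fact aggregate small files into shared recordsize blobs; each file gets its own single block sized to fit it, so a 1858-byte file costs roughly one small block plus a dnode, and there is no fixed inode pool to exhaust. A rough check, with hypothetical paths:

  ls -ls /pool/spectra/file0001.dat   # 512-byte blocks allocated for one ~1858-byte file
  du -sh /pool/spectra                # total on-disk usage...
  ls /pool/spectra | wc -l            # ...divided by the file count gives the average cost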
2008 May 18
2
possible zfs bug? lost all pools
After trying to mount my ZFS pools in single-user mode I got the following message for each: May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as it was last accessed by another system (host: gw.bb1.matik.com.br hostid: 0xbefb4a0f). See: http://www.sun.com/msg/ZFS-8000-EY Any zpool command then reported only that the pools did not exist; it seems the ZFS info on the disks
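The warning means the pool's label records a different hostid from the importing system, so the automatic import is refused as a safety measure. If no other host is actually using the disks, the documented recovery is a forced import:

  zpool import             # list pools visible on the devices
  zpool import -f cache1   # override the hostid check for this pool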
2008 Feb 14
9
100% random writes coming out as 50/50 reads/writes
I'm running on s10s_u4wos_12b and doing the following test. Create a pool, striped across 4 physical disks from a storage array. Write a 100GB file to the filesystem (dd from /dev/zero out to the file). Run I/O against that file, doing 100% random writes with an 8K block size. zpool iostat shows the following...

               capacity     operations    bandwidth
  pool         used
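This pattern is consistent with read-modify-write: each random 8K write lands inside a 128K record, so ZFS must first read the full record before rewriting part of it, yielding roughly one read per write. A hedged fix, assuming the pool is named tank:

  zfs set recordsize=8k tank
  # recordsize only applies to blocks written after the change, so the
  # 100GB test file must be recreated:
  dd if=/dev/zero of=/tank/testfile bs=8k count=13107200   # 100GiB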
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS filesystems, each containing about 200 gigabytes of data. These are part of a single zpool built on four iSCSI devices from our NetApp filer. One of these ZFS filesystems contains a number of global and per-user databases in addition to one sixth of the
2005 Apr 26
5
Is Shorewall compatible with hipac?
Hi all, http://www.hipac.org/index.htm I have just discovered this great project. It seems it surpasses standard netfilter in performance. The documentation states it is more or less compatible with standard netfilter, but has anybody tested whether it is compatible with Shorewall? Tom, have you? Regards -- Jaime Nebrera - jnebrera@eneotecnologia.com IT Consultant - ENEO Tecnologia SL
2007 Apr 19
14
Permanently removing vdevs from a pool
Is it possible to gracefully and permanently remove a vdev from a pool without data loss? The type of pool in question here is a simple pool without redundancy (i.e. JBOD). The documentation mentions offlining, for instance, but without going into the end results of doing that. The thing I'm looking for is an option to evacuate, for lack of a better word, the data from a specific
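At the time there was no supported way to evacuate a top-level vdev; removal of data vdevs only arrived much later in OpenZFS. A sketch of the modern command, with a placeholder device name, for readers on current systems:

  zpool remove tank c0t2d0   # copies the vdev's data onto the remaining vdevs
  zpool status tank          # reports removal/evacuation progress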
2007 May 03
5
ZFS vs UFS2 overhead and may be a bug?
[originally reported for ZFS on FreeBSD, but Pawel Jakub Dawidek says this problem also exists on Solaris, hence this email.] Summary: on ZFS, the overhead of reading a hole seems far worse than actually reading from disk. Small buffers are used to make this overhead more visible. I ran the following script on both ZFS and UFS2 filesystems. [Note that on FreeBSD cat uses a 4k buffer and md5 uses a 1k
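The effect can be reproduced without the original script: create a file that is one large hole and time a small-buffer read of it. A minimal sketch with hypothetical paths and sizes:

  mkfile -n 1g /tank/fs/holefile                    # sparse file: 1GB of hole, no data blocks
  time dd if=/tank/fs/holefile of=/dev/null bs=1k   # per-call overhead dominates, since
                                                    # no disk reads are actually needed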
2008 Jun 10
3
ZFS space map causing slow performance
Hello, I have several ~12TB storage servers using Solaris with ZFS. Two of them have recently developed performance issues where the majority of time in an spa_sync() will be spent in the space_map_*() functions. During this time, "zpool iostat" will show 0 writes to disk, while it does hundreds or thousands of small (~3KB) reads each second, presumably reading space map data from
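The space maps themselves can be examined to confirm the diagnosis; on a heavily fragmented pool they contain huge numbers of tiny free segments that must be read back in during allocation:

  zdb -mm tank         # per-metaslab space map statistics: segment counts, free space
  # on current OpenZFS, zpool list -v also exposes a FRAG column:
  zpool list -v tank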