similar to: Is ZFS efficient for large collections of small files?

Displaying 20 results from an estimated 2000 matches similar to: "Is ZFS efficient for large collections of small files?"

2008 Jun 24
4
zfs send and recordsize
Hi Everyone, I perform a snapshot and a zfs send on a filesystem with a recordsize of 16k, and redirect the output to a plain file. Later, I use cat sentfs | zfs receive otherpool/filesystem. In this case the new filesystem's recordsize will be the default 128k again. The other filesystem attributes (for example atime) are reverted to defaults too. Okay, I can set these later,
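
For anyone hitting the same issue: a plain send stream does not carry dataset properties, so they have to be reapplied on the receiving side. A minimal sketch with placeholder pool/dataset names (note that recordsize only affects blocks written after the change):

    cat sentfs | zfs receive otherpool/filesystem
    zfs set recordsize=16k otherpool/filesystem   # reapply properties by hand
    zfs set atime=off otherpool/filesystem

    # newer releases can also include properties in the stream (zfs send -p)
    # or force them at receive time (zfs receive -o recordsize=16k)
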
2007 Sep 04
23
I/O freeze after a disk failure
Hi all, yesterday we had a drive failure on a fc-al jbod with 14 drives. Suddenly the zpool using that jbod stopped responding to I/O requests and we got tons of the following messages in /var/adm/messages:

    Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g20000004cfd81b9f (sd52):
    Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
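
A common triage sequence for a pool hung by a dying drive, sketched with placeholder pool and device names, is to check the error counters and manually offline the suspect disk if ZFS has not faulted it on its own:

    zpool status -x              # which pools are unhealthy
    iostat -En                   # per-device hard/transport error counters
    zpool offline tank c5t2d0    # take the suspect drive out of service
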
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the blocksize of a particular file. I know the blocksize for a particular file is decided at creation time, as a function of the write sizes done and the recordsize property of the dataset. How can I access that information? Some zdb magic? -- Jesus Cea Avion
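
One way to get at this, assuming a dataset named tank/fs: look up the file's object number with ls -i, then dump that object with zdb; the dblk column reports the data block size chosen for the file.

    ls -i /tank/fs/somefile    # prints the object (inode) number, e.g. 12345
    zdb -ddddd tank/fs 12345   # check the "dblk" field in the output
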
2008 Feb 14
9
100% random writes coming out as 50/50 reads/writes
I'm running on s10s_u4wos_12b and doing the following test. Create a pool, striped across 4 physical disks from a storage array. Write a 100GB file to the filesystem (dd from /dev/zero out to the file). Run I/O against that file, doing 100% random writes with an 8K block size. zpool iostat shows the following... capacity operations bandwidth pool used
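
The usual explanation is read-modify-write: an 8K write into a default 128K record forces ZFS to read the rest of the record before rewriting it, so half the I/O shows up as reads. Matching the recordsize to the application's I/O size avoids this; a sketch with a placeholder dataset name:

    zfs set recordsize=8k tank/fs   # match the 8K random-write size
    # recreate (or fully rewrite) the 100GB test file so it picks up the new record size
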
2007 Dec 04
2
X4500 ILOM thinks disk 20 is faulted, ZFS thinks not.
Hey Guys, Have any of y'all seen a condition where the ILOM considers a disk faulted (status is 3 instead of 1), but ZFS keeps writing to the disk and doesn't report any errors? I'm going to do a scrub tomorrow and see what comes back. I'm curious what caused the ILOM to fault the disk. Any advice is greatly appreciated. Best Regards, Jason P.S. The system is
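
A scrub plus the FMA error log is a reasonable cross-check here; a sketch with a placeholder pool name:

    zpool scrub tank
    zpool status -v tank   # per-device read/write/checksum error counters
    fmdump -eV             # FMA error telemetry, to compare against the ILOM's view
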
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy
2009 Mar 03
8
zfs list extentions related to pNFS
Hi, I am soliciting input from the ZFS engineers and/or ZFS users on an extension to "zfs list". Thanks in advance for your feedback. Quick Background: The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU object set type which is used on the pNFS data server to store pNFS stripe DMU objects. A pNFS dataset gets created with the "zfs
2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
Hi, I'm struggling to get a stable ZFS replication using Solaris 10 11/06 (with current patches) and AVS 4.0 for several weeks now. We tried it on VMware first and ended up in kernel panics en masse (yes, we read Jim Dunham's blog articles :-). Now we are trying on the real thing, two X4500 servers. Well, I have no trouble replicating our kernel panics there, too ... but I think I
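
For reference, the step that usually trips people up in reverse synchronization is that the zpool must be exported on the primary before reversing and imported again afterwards. Roughly, with set names omitted and the sndradm flags quoted from memory (so verify against the AVS docs):

    zpool export tank      # on the primary, before the reverse sync
    sndradm -n -u -r       # reverse update sync (secondary -> primary)
    zpool import tank      # on the primary, once the set is replicating again
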
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's newness, since really a lot of that stuff has been in Lustre for years. So a-benchmarking I went. Results at the bottom:
2010 Dec 09
3
ZFS Prefetch Tuning
Hi All, Is there a way to tune the zfs prefetch on a per pool basis? I have a customer that is seeing slow performance on a pool that contains multiple tablespaces from an Oracle database; looking at the LUNs associated with that pool, they are constantly at 80%-100% busy. Looking at the output from arcstat for the miss % on data, prefetch and metadata, we are getting around 5-10% on data,
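
As far as I know the file-level prefetch switch is global, not per pool. The classic Solaris knobs, for what they're worth (values are examples):

    # persistent, in /etc/system:
    set zfs:zfs_prefetch_disable = 1

    # live, via mdb (immediate but not persistent):
    echo zfs_prefetch_disable/W0t1 | mdb -kw
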
2010 May 18
25
Very serious performance degradation
Hi, I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool, made of 4 x 1TB SATA disks:

    zfs_raid    ONLINE 0 0 0
      raidz1    ONLINE 0 0 0
        c7t2d0  ONLINE 0 0 0
        c7t3d0  ONLINE 0 0 0
        c7t4d0  ONLINE 0 0
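
Two cheap checks before digging deeper: raidz pools degrade sharply when nearly full, and a single slow disk drags a whole raidz1 vdev down. A sketch against this pool:

    zpool list zfs_raid           # CAP column: performance drops badly past ~80-90% full
    zpool iostat -v zfs_raid 5    # per-disk numbers; one outlier suggests a sick drive
    iostat -xn 5                  # a disk with much higher asvc_t than its peers is suspect
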
2007 Oct 08
5
Is Puppet similar to Capistrano?
I discovered Capistrano while I was trying to figure out what I wanted. See attached notes. http://www.genunix.org/wiki/index.php/GNOSIS/Kraken Puppet seems promising. Thanks, Brian -- - Brian Gupta http://opensolaris.org/os/project/nycosug/
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem to a zvol turn the block storage into swiss cheese? I am considering serving ext3 journals (and possibly swap too) off a raw, hardware-mirrored device. Before I do (and I'll write up any results) I'd like to know
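
One mitigation worth testing before moving the journal off ZFS entirely: make the zvol block size match ext3's 4K blocks, so small journal writes don't become read-modify-write cycles. Sizes and names below are placeholders:

    zfs create -V 10G -o volblocksize=4k tank/ext3vol   # volblocksize is fixed at creation
    # then, on the Linux initiator, against the exported iSCSI LUN:
    mkfs.ext3 -b 4096 /dev/sdX                          # keep fs and volume blocks aligned
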
2007 Aug 10
9
Problems monitoring Mongrel with F5 BigIP
If this has already been covered, please point me to that (I didn't find anything in my searches)... We are using F5 BigIP LTM load balancers. They have many pools of Mongrels they load balance across, and I of course want the F5 to know when a Mongrel goes down or is unavailable, etc. To do that, I need to have an F5 health monitor for HTTP make a request to the Mongrel. We do this
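
The usual pattern is a trivial health-check action in the app plus an F5 HTTP monitor that requests it and matches on the response. The path and strings below are illustrative, not F5 defaults:

    Send String:    GET /heartbeat HTTP/1.0\r\n\r\n
    Receive String: 200 OK
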
2006 May 19
3
Oracle on ZFS vs. UFS
Hi, I'm preparing a personal TPC-H benchmark. The goal is not to measure or optimize the database performance, but to compare ZFS to UFS in similar configurations. At the moment I'm preparing the tests at home. The test setup is as follows:

    . Solaris snv_37
    . 2 x AMD Opteron 252
    . 4 GB RAM
    . 2 x 80 GB ST380817AS
    . Oracle 10gR2 (small SGA (320m))

The disks also contain the OS
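
For a fair comparison it is worth applying the standard Oracle-on-ZFS tuning, since the 128K default recordsize penalizes 8K database I/O. A sketch (the dataset names and the 8K db_block_size are assumptions):

    zfs create -o recordsize=8k tank/oradata   # match Oracle's db_block_size
    zfs create tank/oralog                     # redo logs are fine at the default recordsize
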
2007 May 03
5
ZFS vs UFS2 overhead and may be a bug?
[originally reported for ZFS on FreeBSD, but Pawel Jakub Dawidek says this problem also exists on Solaris, hence this email.] Summary: on ZFS, the overhead for reading a hole seems far worse than actually reading from disk. Small buffers are used to make this overhead more visible. I ran the following script on both ZFS and UFS2 filesystems. [Note that on FreeBSD cat uses a 4k buffer and md5 uses a 1k
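
The script itself is cut off in the snippet; a hypothetical equivalent that reproduces the shape of the test (a mostly-hole sparse file read through small buffers, FreeBSD-style tools) might be:

    dd if=/dev/zero of=holey bs=1m seek=1024 count=1   # ~1GB file that is almost entirely hole
    time cat holey > /dev/null                         # cat reads in 4k chunks on FreeBSD
    time md5 holey                                     # md5 reads in 1k chunks on FreeBSD
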
2012 May 17
6
Question about plans for the forge.
Currently, and going forward, people will be running multiple versions of Puppet. What are the plans for Puppet compatibility with Forge modules? We may want to be able to specify what version of Puppet is running and ask for a compatible module (which may be the same). Thanks, Brian
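
For what it's worth, this is roughly the shape the Forge eventually settled on: a module's metadata.json declares the Puppet versions it supports (the version range below is an example):

    "requirements": [
      { "name": "puppet", "version_requirement": ">= 2.7.20 < 4.0.0" }
    ]
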
2008 Apr 18
1
lots of small, twisty files that all look the same
A customer has a zpool where their spectral analysis applications create a ton (millions?) of very small files that are typically 1858 bytes in length. They're using ZFS because UFS consistently runs out of inodes. I'm assuming that ZFS aggregates these little files into recordsize (128K?) blobs for writes. This seems to go reasonably well, amazingly enough. Reads are a
2011 Sep 23
21
Official puppetlabs position on cron vs puppet as a service?
Over the years many shops have taken to running puppet via cron to address memory leaks in earlier versions of Ruby, but the official position was that puppet was meant to be run as a continually running service. I am wondering if the official position has changed. On one hand, many if not all of the early Ruby issues have been fixed; on the other, the addition of mcollective into the mix as
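
For comparison, the cron variant typically looks like this (the interval is illustrative):

    # run the agent every 30 minutes instead of as a daemon
    */30 * * * * /usr/bin/puppet agent --onetime --no-daemonize --splay
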
2007 May 01
2
Multiple filesystem costs? Directory sizes?
While setting up my new system, I'm wondering whether I should go with plain directories or use ZFS filesystems for specific stuff. About the cost of ZFS filesystems: I read on some Sun blog in the past that it was something like 64k of kernel memory (or whatever) per active filesystem. What are the additional costs, however? The reason I'm considering multiple filesystems is, for instance,
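
The practical argument for separate filesystems is per-dataset properties, quotas, and snapshots, none of which plain directories can give you. A sketch with placeholder names and values:

    zfs create -o compression=on tank/home/mail   # per-dataset properties
    zfs set quota=10G tank/home/mail              # per-dataset quota
    zfs snapshot tank/home/mail@before-upgrade    # snapshot just this subtree
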