search for: recordsize

Displaying 20 results from an estimated 78 matches for "recordsize".

2008 Jun 24
4
zfs send and recordsize
Hi Everyone, I perform a snapshot and a zfs send on a filesystem with a recordsize of 16k, and redirect the output to a plain file. Later, I use cat sentfs | zfs receive otherpool/filesystem. In this case the new filesystem's recordsize will be the default 128k again. The other filesystem attributes (for example atime) are reverted to defaults too. Okay, I can...
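A minimal sketch of the round trip described above, using the names from the post; in this era a plain zfs send stream carries no properties, so recordsize and atime have to be reset by hand on the received filesystem, and recordsize only governs files written after the change. Later builds added zfs send -R to replicate properties along with the data.

    zfs snapshot pool/filesystem@snap1        # hypothetical snapshot name
    zfs send pool/filesystem@snap1 > sentfs
    cat sentfs | zfs receive otherpool/filesystem
    # the received dataset reverts to default properties; reapply by hand
    zfs set recordsize=16k otherpool/filesystem
    zfs set atime=off otherpool/filesystem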
2008 Feb 14
9
100% random writes coming out as 50/50 reads/writes
I'm running on s10s_u4wos_12b and doing the following test. Create a pool, striped across 4 physical disks from a storage array. Write a 100GB file to the filesystem (dd from /dev/zero out to the file). Run I/O against that file, doing 100% random writes with an 8K block size. zpool iostat shows the following... capacity operations bandwidth pool used
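The usual explanation for this pattern is read-modify-write: 8K random writes into the default 128K records force ZFS to read the rest of each record before rewriting it. A hedged sketch of the standard check, with a hypothetical pool/filesystem name; recordsize applies only to blocks written after it is set, so the test file must be recreated:

    zfs set recordsize=8k tank/fs     # match the application I/O size
    dd if=/dev/zero of=/tank/fs/testfile bs=1024k count=102400    # rewrite the 100GB file
    zpool iostat tank 5               # the read half should largely disappear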
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy
2007 May 01
2
Multiple filesystem costs? Directory sizes?
...st of ZFS filesystems, I read on some Sun blog in the past about something like 64k kernel memory (or whatever) per active filesystem. What are, however, the additional costs? The reason I'm considering multiple filesystems is, for instance, easy ZFS backups and snapshots, but also tuning the recordsizes. For storing lots of generic pictures from the web, smaller recordsizes may be appropriate to trim down the waste once the file size surpasses the record size, as well as using large recordsizes for video files on a separate filesystem. Turning on and off compression and access times for performanc...
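Per-dataset properties are what make this split cheap to try; a sketch with hypothetical dataset names:

    zfs create -o recordsize=32k tank/pictures             # small web images
    zfs create -o recordsize=128k -o atime=off tank/video  # large sequential files
    zfs snapshot tank/pictures@daily     # snapshots and backups stay per-filesystem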
2007 Aug 21
12
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files - for example, 20 million TIFF images in a collection, each between 5 and 500k in size? I am asking because I could have sworn that I read somewhere that it isn't, but I can't find the reference. Thanks, Brian -- - Brian Gupta http://opensolaris.org/os/project/nycosug/
2009 Mar 16
1
Forensics related ZFS questions
1. Does variable FSB block sizing extend to files larger than record size, concerning the last FSB allocated? In other words, for files larger than 128KB, that utilize more than one full recordsize FSB, will the LAST FSB allocated be 'right-sized' to fit the remaining data, or will ZFS allocate a full recordsize FSB for the last 'chunk' of the file? (This is a file slack issue re: how much will exist.) 2. Can a developer confirm that COW occurs at the FS...
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that if I run "zfs set checksum=<different>" to change the algorithm, this will change the checksum algorithm for all FUTURE data blocks written, but does not in any way change the checksum for previously written data blocks. I need to corroborate this understanding. Could someone please point me to a document that states this? I have searched and searched
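That understanding matches how the property works: each block pointer records the checksum algorithm it was written with, so a property change only governs future writes, and old blocks still verify against their original algorithm. A sketch with a hypothetical dataset name:

    zfs get checksum tank/fs          # current algorithm, e.g. fletcher4
    zfs set checksum=sha256 tank/fs   # applies to newly written blocks only
    # pre-existing blocks keep their old checksum until the data is
    # rewritten (copied in place, or sent/received into a fresh dataset)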
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, Created a zpool with a 64k recordsize and enabled dedup on it:
    zpool create -O recordsize=64k TestPool device1
    zfs set dedup=on TestPool
I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list:
    Prompt:~# zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    TestPool...
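One thing worth checking in a case like this: the DEDUP ratio in zpool list only reflects blocks written while dedup was enabled, and zdb can show the dedup table directly. A sketch against the pool from the post (zdb -S can be slow on large pools):

    zpool list TestPool     # DEDUP column: ratio over deduplicated writes
    zdb -DD TestPool        # histogram of the dedup table by reference count
    zdb -S TestPool         # simulate dedup over the pool's existing data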
2010 Mar 01
1
ARC & Maxphys & recordsize
Greetings all. Can someone explain to me why I was able to successfully issue (through an application) I/Os of 128K each (monitored by DTrace) while my maxphys is only 56K? My understanding is that maxphys is the max I/O size that the storage device can handle for a single I/O; it has been well documented that if an application issued an I/O larger than maxphys, it would be broken down to
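maxphys caps a single physical transfer at the driver level, not what an application or the filesystem may request; larger logical I/Os are split into maxphys-sized chunks on the way down, which is why DTrace can still observe 128K requests above that layer. To inspect or raise the limit on Solaris (the value below is an example only):

    # check the current limit, in bytes
    echo "maxphys/D" | mdb -k
    # raise it via an /etc/system entry (takes effect after reboot)
    set maxphys=1048576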
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the blocksize of a particular file. I know the blocksize for a particular file is decided at creation time, as a function of the write sizes done and the recordsize property of the dataset. How can I access that information? Some zdb magic? -- Jesus Cea Avion, jcea at jcea.es - http://www.jcea.es/ - jabber / xmpp:jcea at jabber.org
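One piece of zdb magic that answers this, assuming a dataset named tank/fs: get the file's object number with ls -i, then dump the object and read the dblk field, which is the file's data block size:

    ls -i /tank/fs/somefile      # first column is the object number
    zdb -ddddd tank/fs 12345     # '12345' stands in for that object number; look for 'dblk'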
2010 Dec 09
3
ZFS Prefetch Tuning
...he prefetch misses are due to the tablespace data files. The configuration of the system is as follows:
Sun Fire X4600 M2, 8 x 2.3 GHz Quad Core Processors, 256GB Memory
Solaris 10 Update 7
ZFS ARC cache max set to 85GB
4 zpools configured from a 6540 storage array:
* apps - single LUN (RAID 5 from the array), recordsize set to 128k; pool contains binaries and application files
* backup - 8 LUNs (varying sizes, all from a 6180 array with SATA disks), used for storing Oracle dumps
* data - 5 LUNs (RAID 10, 6 physical drives), recordsize set to 8k, used for Oracle data files...
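For reference, file-level prefetch on Solaris 10 is controlled from /etc/system, and the ARC kstats show whether prefetch is paying off; a sketch, not a recommendation:

    * /etc/system entry (reboot required)
    set zfs:zfs_prefetch_disable = 1

    # observe prefetch hit/miss counters while the workload runs
    kstat -m zfs -n arcstats | grep prefetch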
2008 Apr 18
1
lots of small, twisty files that all look the same
A customer has a zpool where their spectral analysis applications create a ton (millions?) of very small files that are typically 1858 bytes in length. They're using ZFS because UFS consistently runs out of inodes. I'm assuming that ZFS aggregates these little files into recordsize (128K?) blobs for writes. This seems to go reasonably well amazingly enough. Reads are a disaster as we might expect. To complicate things, writes are coming in over NFS. Reads may be local or may be via NFS and may be random. Once written, data is not changed until removed. No Z RAID'...
2008 May 26
2
SNV82: Not enough memory is available, and dom0 cannot be shrunk any further
Hi All, I am running Nevada 79 BFU'ed to 82. The machine is an Ultra 20 with 4GB memory. I have several Windows XP domUs configured and registered. Whenever I try to start the fourth domain I get an out of memory exception: "Not enough memory is available, and dom0 cannot be shrunk any further". Each of my domains only uses 256MB, so I thought there would be sufficient memory
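The usual remedy (hedged; exact syntax varies by build) is to cap dom0's memory at boot so the hypervisor is not forced to shrink it on demand, either with a dom0_mem option on the xen.gz line in GRUB or by ballooning dom0 down at runtime; the values below are examples:

    kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M     # GRUB menu.lst, Solaris xVM
    xm mem-set Domain-0 2048                        # runtime balloon, in MB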
2006 May 19
3
Oracle on ZFS vs. UFS
Hi, I'm preparing a personal TPC-H benchmark. The goal is not to measure or optimize the database performance, but to compare ZFS to UFS in similar configurations. At the moment I'm preparing the tests at home. The test setup is as follows:
. Solaris snv_37
. 2 x AMD Opteron 252
. 4 GB RAM
. 2 x 80 GB ST380817AS
. Oracle 10gR2 (small SGA (320m))
The disks also contain the OS
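The ZFS-side preparation usually paired with this kind of comparison is matching recordsize to Oracle's block size; a sketch with hypothetical dataset names, assuming the default 8K db_block_size:

    zfs create -o recordsize=8k tank/oradata    # datafiles: match db_block_size
    zfs create tank/oralog                      # redo logs: leave at 128k for large sequential writes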
2009 Apr 26
9
Peculiarities of COW over COW?
We run our IMAP spool on ZFS that's derived from LUNs on a Netapp filer. There's a great deal of churn in e-mail folders, with messages appearing and being deleted frequently. I know that ZFS uses copy-on-write, so that blocks in use are never overwritten, and that deleted blocks are added to a free list. This behavior would spread the free list all over the zpool. As well,
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS filesystems, each containing about 200 gigabytes of data. These are part of a single zpool built on four Iscsi devices from our Netapp filer. One of these ZFS filesystems contains a number of global and per-user databases in addition to one sixth of the
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux ext3 over iSCSI to zvols, especially with small writes. Does running a journalled filesystem to a zvol turn the block storage into swiss cheese? I am considering serving ext3 journals (and possibly swap too) off a raw, hardware-mirrored device. Before I do (and I'll write up any results) I'd like to know
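One mismatch worth ruling out first: ext3 issues 4K I/O while a zvol defaults to an 8K volblocksize, so small journal writes can degenerate into read-modify-write on the Solaris side. volblocksize is fixed at creation time; a sketch with hypothetical names:

    zfs create -V 10g -o volblocksize=4k tank/ext3vol
    # export tank/ext3vol over iSCSI and run mkfs.ext3 from the Linux initiator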
2008 Jun 10
3
ZFS space map causing slow performance
...hat are having this problem? They are production servers, and I have customers complaining, so a temporary fix is needed. 2) Is there any sort of tuning I can do with future servers to prevent this from becoming a problem? Perhaps a way to make sure all the space maps are always in RAM? 3) I set recordsize=32K and turned off compression, thinking that should fix the performance problem for now. However, using a DTrace script to watch calls to space_map_alloc(), I see that it's still looking for 128KB blocks (!!!) for reasons that are unclear to me, thus it hasn't helped the problem...
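A minimal DTrace one-liner in the spirit of the script mentioned above, assuming (as in that era's code) that the allocation size is space_map_alloc()'s second argument:

    dtrace -n 'fbt::space_map_alloc:entry { @sizes = quantize(arg1); }'

Note also that recordsize=32K only governs blocks written after the change; files written earlier keep their 128KB blocks, which would explain the continued 128KB allocations.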
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's newness, since really a lot of that stuff has been in Lustre for years. So a-benchmarking I went. Results at the bottom:
2012 Dec 01
3
6Tb Database with ZFS
Hello, I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I want to set the arc_max parameter so ZFS can't use all my system's memory, but I don't know how much I should set. Do you think 24GB will be enough for a 6TB database? Obviously the more the better, but I can't set too much memory. Has someone successfully implemented something similar? We ran some tests and the
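For reference, zfs_arc_max takes a byte count in /etc/system; a sketch using the poster's own 24GB figure (reboot required):

    * /etc/system: cap the ARC at 24GB (24 * 1073741824 bytes)
    set zfs:zfs_arc_max = 25769803776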