Displaying 20 results from an estimated 300 matches similar to: "ZFS Prefetch Tuning"
2012 Dec 01
3
6Tb Database with ZFS
Hello,
I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I
want to set the arc_max parameter so ZFS can't use all my system's memory,
but I don't know how much I should set. Do you think 24GB will be enough
for a 6TB database? Obviously the more the better, but I can't set too
much memory.
Has someone successfully implemented something similar?
We ran some tests and the...
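On Solaris the ARC cap is set in /etc/system; a minimal sketch of limiting
it to 24GB (the value is in bytes, and a reboot is needed for it to take
effect):

  echo "set zfs:zfs_arc_max = 25769803776" >> /etc/system   # 24 * 1024^3 bytes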
2011 May 09
1
configure: error: *** Can't find recent OpenSSL libcrypto (see config.log for details) ***
Hi,
Getting the below error while trying to compile openssh-5.8p2 on CentOS
5.6 x86-64:
configure: error: *** Can't find recent OpenSSL libcrypto (see config.log
for details) ***
I recently compiled the latest OpenSSL version (OpenSSL 1.0.0d 8 Feb 2011).
Please help me solve this issue.
Thanks & Regards,
Abdul Jabbar
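If the new OpenSSL went into a non-default prefix, a common fix is to point
configure at it (a sketch; /usr/local/ssl is an assumption about where the
custom OpenSSL was installed):

  ./configure --with-ssl-dir=/usr/local/ssl
  make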
2008 Jun 26
1
ECHO CANCELLATION
To the support team,
I am getting confused while studying the manual of Speex 1.2 Beta 2.
If audio frame capture and playback are used asynchronously, then the
speex_echo_cancel() function is preferred, as it is simpler than the
speex_echo_cancellation() function.
However, when I go through the API manual, in section 5.4.4.1 it is
mentioned that this function is deprecated.
Please tell me the set of...
2006 May 19
3
Oracle on ZFS vs. UFS
Hi,
I'm preparing a personal TPC-H benchmark. The goal is not to measure or
optimize the database performance, but to compare ZFS to UFS in similar
configurations.
At the moment I'm preparing the tests at home. The test setup is as
follows:
. Solaris snv_37
. 2 x AMD Opteron 252
. 4 GB RAM
. 2 x 80 GB ST380817AS
. Oracle 10gR2 (small SGA (320m))
The disks also contain the OS...
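For Oracle on ZFS the recordsize is usually the first knob to look at; a
minimal sketch of matching it to an 8K database block size (pool and
dataset names are assumptions):

  zfs create -o recordsize=8k tank/oradata   # match Oracle's db_block_size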
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check how much
space the package cache for pkg(1) uses, it takes a bit longer on this
host than on a comparable machine to which I transferred all the data.
user@host:/var/pkg$ time...
2008 Feb 14
9
100% random writes coming out as 50/50 reads/writes
I'm running on s10s_u4wos_12b and doing the following test.
Create a pool, striped across 4 physical disks from a storage array.
Write a 100GB file to the filesystem (dd from /dev/zero out to the file).
Run I/O against that file, doing 100% random writes with an 8K block size.
zpool iostat shows the following...
             capacity     operations    bandwidth
pool         used...
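The usual explanation for this symptom is a recordsize mismatch: 8K random
writes into the default 128K records force ZFS to read the remainder of
each record before rewriting it, hence the 50/50 split. A sketch of the
workaround (dataset name is an assumption); note the file must be
recreated, since recordsize only applies to newly written files:

  zfs set recordsize=8k tank/testfs
  dd if=/dev/zero of=/tank/testfs/file bs=1024k count=102400   # rewrite the 100GB file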
2008 Jun 24
4
zfs send and recordsize
Hi Everyone,
I perform a snapshot and a zfs send on a filesystem with a recordsize
of 16k, and redirect the output to a plain file. Later, I use cat
sentfs | zfs receive otherpool/filesystem. In this case the new
filesystem's recordsize will be the default 128k again. The other
filesystem attributes (for example atime) are reverted to defaults
too. Okay, I can set these later,...
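A sketch of the sequence described, with the properties reapplied on the
receiving side (the source dataset name is an assumption; later ZFS
releases also add zfs send -p to preserve properties in the stream):

  zfs snapshot pool/fs@snap
  zfs send pool/fs@snap > sentfs
  cat sentfs | zfs receive otherpool/filesystem
  zfs set recordsize=16k otherpool/filesystem   # only affects data written afterwards
  zfs set atime=off otherpool/filesystem        # reapply whatever was set on the source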
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
--
Regards,
Jeremy
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS filesystems contains a number of global and per-user
databases in addition to one sixth of the...
2007 Aug 21
12
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5
and 500k in size?
I am asking because I could have sworn that I read somewhere that it
isn't, but I can't find the reference.
Thanks,
Brian
--
- Brian Gupta
http://opensolaris.org/os/project/nycosug/
2009 Mar 16
1
Forensics related ZFS questions
1. Does variable FSB block sizing extend to files larger than record
size, concerning the last FSB allocated?
In other words, for files larger than 128KB, that utilize more than one
full recordsize FSB, will the LAST FSB allocated be 'right-sized' to fit
the remaining data, or will ZFS allocate a full recordsize FSB for the
last 'chunk' of the file? (This is...
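One way to check this empirically is to dump a file's actual block layout
with zdb (a sketch; the dataset name is an assumption, and the object
number is the file's inode number from ls -i):

  ls -i /tank/fs/testfile     # note the object number, e.g. 8
  zdb -ddddd tank/fs 8        # prints the dnode's block tree with per-block sizes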
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that if I "zfs set checksum=<different>" to change the
algorithm, this will change the checksum algorithm for all FUTURE data
blocks written, but does not in any way change the checksum for previously
written data blocks.
I need to corroborate this understanding. Could someone please point me to
a document that states this? I have searched and searched...
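For illustration, a minimal sketch (dataset and file names are
assumptions); the new algorithm applies only to blocks written after the
change, so existing data keeps its old checksums until it is rewritten:

  zfs set checksum=sha256 tank/fs
  cp /tank/fs/old.dat /tank/fs/old.dat.rewritten   # the copy's blocks get sha256 checksums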
2007 May 01
2
Multiple filesystem costs? Directory sizes?
While setting up my new system, I'm wondering whether I should go with
plain directories or use ZFS filesystems for specific stuff. About the
cost of ZFS filesystems: I read on some Sun blog in the past about
something like 64k of kernel memory (or whatever) per active filesystem.
What, however, are the additional costs?
The reason I'm considering multiple filesystems is, for instance,...
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi,
Created a zpool with a 64k recordsize and enabled dedupe on it:
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over NFS from a Windows client.
Here is the output of zpool list:
Prompt:~# zpool list
NAME       SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
TestPool   696G  19.1G  677G   2%  1.13x  ONLINE  -
When I ran a...
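To cross-check where the reported savings come from, zdb can print the
dedup table statistics (a sketch; output format varies by build):

  zdb -DD TestPool   # dumps DDT histograms: unique vs. duplicated blocks and the implied ratio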
2008 May 26
2
SNV82: Not enough memory is available, and dom0 cannot be shrunk any further
Hi All,
I am running Nevada 79 BFU'ed to 82. The machine is an Ultra 20 with 4GB
memory. I have several Windows XP domUs configured and registered.
Whenever I try to start the fourth domain I get an out-of-memory exception:
Not enough memory is available, and dom0 cannot be shrunk any further
Each of my domains only uses 256MB, so I thought there would be sufficient
memory...
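A common workaround is to cap dom0's memory at boot so it never has to
balloon down for new domUs; a sketch of the hypervisor line in
/boot/grub/menu.lst (the 1024M figure is an assumption, to be sized to
your own needs):

  kernel$ /boot/$ISADIR/xen.gz dom0_mem=1024M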
2009 Apr 26
9
Peculiarities of COW over COW?
We run our IMAP spool on ZFS that's derived from LUNs on a NetApp
filer. There's a great deal of churn in e-mail folders, with messages
appearing and being deleted frequently. I know that ZFS uses
copy-on-write, so that blocks in use are never overwritten, and that
deleted blocks are added to a free list. This behavior would spread the
free list all over the zpool. As well,...
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux
ext3 over iSCSI to zvols, especially with small writes. Does running
a journalled filesystem on a zvol turn the block storage into swiss
cheese? I am considering serving ext3 journals (and possibly swap
too) off a raw, hardware-mirrored device. Before I do (and I'll
write up any results) I'd like to know...
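A sketch of moving the ext3 journal to an external device, as considered
above (Linux device names are assumptions; the journal device's block size
must match the filesystem's):

  mke2fs -O journal_dev /dev/sdb          # format the mirrored device as an external journal
  mkfs.ext3 -J device=/dev/sdb /dev/sdc   # create the filesystem pointing at it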
2008 Apr 18
1
lots of small, twisty files that all look the same
A customer has a zpool where their spectral analysis applications create a ton (millions?) of very small files that are typically 1858 bytes in length. They're using ZFS because UFS consistently runs out of inodes. I'm assuming that ZFS aggregates these little files into recordsize (128K?) blobs for writes. This seems to go reasonably well, amazingly enough. Reads are a...
2008 Jun 10
3
ZFS space map causing slow performance
Hello,
I have several ~12TB storage servers using Solaris with ZFS. Two of them have recently developed performance issues where the majority of time in spa_sync() is spent in the space_map_*() functions. During this time, "zpool iostat" will show 0 writes to disk, while it does hundreds or thousands of small (~3KB) reads each second, presumably reading space map data from...
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom: