Displaying 20 results from an estimated 4000 matches similar to: "100% random writes coming out as 50/50 reads/writes"
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
--
Regards,
Jeremy
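For context, the manual tuning that such heuristics would automate looks like this (a minimal sketch; the "tank/db" dataset name and the 8k value are assumptions, and a changed recordsize only applies to blocks written afterwards):
  zfs set recordsize=8k tank/db
  zfs get recordsize tank/db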
2007 Aug 21
12
Is ZFS efficient for large collections of small files?
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5 KB
and 500 KB in size?
I am asking because I could have sworn that I read somewhere that it
isn't, but I can't find the reference.
Thanks,
Brian
--
- Brian Gupta
http://opensolaris.org/os/project/nycosug/
2006 May 19
3
Oracle on ZFS vs. UFS
Hi,
I'm preparing a personal TPC-H benchmark. The goal is not to measure or
optimize the database performance, but to compare ZFS to UFS in similar
configurations.
At the moment I'm preparing the tests at home. The test setup is as
follows:
. Solaris snv_37
. 2 x AMD Opteron 252
. 4 GB RAM
. 2 x 80 GB ST380817AS
. Oracle 10gR2 (small SGA (320m))
The disks also contain the OS
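A common starting layout for such a comparison (a sketch only; the pool name and device names are made up, and 8k matches Oracle's default db_block_size):
  zpool create dbpool c1d0 c2d0
  zfs create dbpool/oradata
  zfs set recordsize=8k dbpool/oradata   # match the database block size
  zfs create dbpool/oralog               # redo logs usually keep the 128k default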
2010 Dec 09
3
ZFS Prefetch Tuning
Hi All,
Is there a way to tune the zfs prefetch on a per-pool basis? I have a
customer that is seeing slow performance on a pool that contains multiple
tablespaces from an Oracle database; the LUNs associated with that pool
are constantly at 80%-100% busy. Looking at the output from arcstat for
the miss % on data, prefetch and metadata, we are getting around 5-10%
on data,
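As far as I know the prefetch knob is system-wide rather than per-pool; a sketch for Solaris:
  # in /etc/system (persistent, takes effect after reboot):
  set zfs:zfs_prefetch_disable = 1
  # or live via mdb (immediate, not persistent):
  echo "zfs_prefetch_disable/W0t1" | mdb -kw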
2007 Sep 11
4
ext3 on zvols journal performance pathologies?
I've been seeing read and write performance pathologies with Linux
ext3 over iSCSI to zvols, especially with small writes. Does running
a journalled filesystem on a zvol turn the block storage into swiss
cheese? I am considering serving ext3 journals (and possibly swap
too) off a raw, hardware-mirrored device. Before I do (and I'll
write up any results) I'd like to know
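An external ext3 journal can be set up roughly like this (a sketch; /dev/sdJ and /dev/sdD are placeholder device names, and the block sizes of the two devices must match):
  mke2fs -O journal_dev -b 4096 /dev/sdJ         # dedicated journal device
  mke2fs -j -J device=/dev/sdJ -b 4096 /dev/sdD  # ext3 fs using the external journal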
2008 Apr 18
1
lots of small, twisty files that all look the same
A customer has a zpool where their spectral analysis applications create a ton (millions?) of very small files that are typically 1858 bytes in length. They're using ZFS because UFS consistently runs out of inodes. I'm assuming that ZFS aggregates these little files into recordsize (128K?) blobs for writes. This seems to go reasonably well, amazingly enough. Reads are a
2008 Jun 24
4
zfs send and recordsize
Hi Everyone,
I perform a snapshot and a zfs send on a filesystem with a recordsize
of 16k, and redirect the output to a plain file. Later, I use cat
sentfs | zfs receive otherpool/filesystem. In this case the new
filesystem's recordsize will be the default 128k again. The other
filesystem attributes (for example atime) are reverted to defaults
too. Okay, I can set these later,
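What was done, reconstructed as a sketch (the names are from the post, the rest is assumed), plus re-applying properties on the receiving side:
  zfs snapshot pool/fs@snap
  zfs send pool/fs@snap > sentfs
  cat sentfs | zfs receive otherpool/filesystem
  # a plain send does not carry properties, so reset them afterwards;
  # recordsize only affects blocks written from now on:
  zfs set recordsize=16k otherpool/filesystem
  zfs set atime=off otherpool/filesystem
If the build is recent enough, zfs send -R preserves dataset properties in the stream.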
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
2008 May 26
2
SNV82: Not enough memory is available, and dom0 cannot be shrunk any further
Hi All,
I am running Nevada 79 BFU'ed to 82. The machine is an Ultra 20 with 4 GB
of memory. I have several Windows XP domUs configured and registered.
Whenever I try to start the fourth domain I get an out-of-memory exception:
Not enough memory is available, and dom0 cannot be shrunk any further
Each of my domains only uses 256 MB, so I thought there would be sufficient
memory
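One usual suspect is dom0's memory floor: capping dom0 at boot leaves the remainder for domUs. A sketch for the Solaris xVM GRUB menu.lst entry (the 1024M figure is an assumption):
  kernel$ /boot/$ISADIR/xen.gz dom0_mem=1024M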
2007 May 03
5
ZFS vs UFS2 overhead, and maybe a bug?
[originally reported for ZFS on FreeBSD, but Pawel Jakub Dawidek
says this problem also exists on Solaris, hence this email.]
Summary: on ZFS, the overhead for reading a hole seems far worse
than actually reading from a disk. Small buffers are used to
make this overhead more visible.
I ran the following script on both ZFS and UFS2 filesystems.
[Note that on FreeBSD cat uses a 4k buffer and md5 uses a 1k
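The kind of reproduction being described, as a sketch (paths and sizes are placeholders; the small read buffer makes per-call overhead dominate):
  dd if=/dev/zero of=/fs/data bs=1024k count=100        # file of real blocks
  dd if=/dev/zero of=/fs/hole bs=1024k seek=99 count=1  # file that is mostly a hole
  time dd if=/fs/data of=/dev/null bs=4k
  time dd if=/fs/hole of=/dev/null bs=4k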
2009 Jan 21
8
cifs performance
Hello!
I've set up a ZFS/CIFS home storage server, and now get poor performance when playing movies stored on this ZFS from a Windows client. The server hardware is not new, but under Windows its performance was normal.
The CPU is an AMD Athlon (Burton Thunderbird) 2500 running at 1.7 GHz, with 1024 MB of RAM, and storage:
usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci at 0,0/pci1458,5004 at 2,2/cdrom at 1/disk at
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello,
I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow: ~35-44 MB/s at 1 MB blocksize writes. I then ran the same test against the underlying ZFS file system and got 121 MB/s. Is there any way to fix this? I would really like to have comparable performance between the ZFS filesystem and ZFS zvols.
# first test is a
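The comparison described, reconstructed as a sketch (the pool name, volume name, and sizes are placeholders):
  zfs create -V 10g tank/testvol
  dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=4096
  dd if=/dev/zero of=/tank/testfile bs=1024k count=4096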
2009 Jan 28
11
destroy means destroy, right?
Hi,
I just said zfs destroy pool/fs, but meant to say zfs destroy
pool/junk. Is 'fs' really gone?
thx
jake
2009 Jan 20
2
hot spare not so hot ??
I have configured a test system with a mirrored rpool and one hot spare. I
powered the system off and pulled one of the disks from rpool to simulate a
hardware failure.
The hot spare is not activating automatically. Is there something more I
should have done to make this work?
pool: rpool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist
for
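If the spare never kicks in on its own (the FMA retire agent normally handles this), it can be attached by hand; the device names below are placeholders:
  zpool status rpool
  zpool replace rpool c0t1d0 c0t2d0   # swap the failed disk for the spare
  zpool set autoreplace=on rpool      # auto-replace a device inserted in the same slot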
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10?
What, other than zfs send/receive, can be done to free the fragmented space?
One ZFS was used for some months to store large disk images (each 50 GB) which were copied there with rsync. This ZFS reports 6.39 TB used in zfs list but only 2 TB used according to du.
The other ZFS was used for similar
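Before blaming fragmentation, snapshots are the usual explanation for a du/zfs-list gap; a quick check, if the build supports the space view (the dataset name is a placeholder):
  zfs list -o space tank/images
  zfs list -t snapshot -r tank/images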
2007 Apr 30
1
ZFS and Oracle db production deployment
Hello!
Can you please share your experiences with ZFS deployment for Oracle
databases in production use?
Why did you choose to deploy the database on ZFS?
What features of ZFS are you using?
What tuning was done during ZFS setup?
How big are the databases?
Thank you,
Jay
2010 Jan 31
5
server hang with compression on, ping timeouts from remote machine
Hello All,
I am running NTFS over iSCSI on a ZFS zvol with compression=gzip-9 and blocksize=8K. The server is a 2-core P4 at 3.0 GHz with 5 GB of RAM.
Whenever I start copying files from Windows onto the ZFS disk, after about 100-200 MB have been copied the server starts to experience freezes. I have iostat running, which freezes as well. Even pings on both of the network adapters are reporting
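The described zvol, reconstructed as a sketch (names and size are placeholders); gzip-9 is very CPU-heavy, so a cheaper algorithm is the first thing to try on a 2-core box:
  zfs create -V 50g -o volblocksize=8k -o compression=gzip-9 tank/ntfsvol
  zfs set compression=lzjb tank/ntfsvol   # cheaper alternative to test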
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the blocksize of a particular file. I know the
blocksize for a particular file is decided at creation time, as a function
of the write sizes done and the recordsize property of the dataset.
How can I access that information? Some zdb magic?
--
Jesus Cea Avion _/_/ _/_/_/ _/_/_/
jcea at
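One known way is zdb on the file's object number (the path and number are placeholders; "dblk" in the dnode output is the file's block size):
  ls -i /tank/fs/somefile      # object (inode) number, e.g. 12345
  zdb -ddddd tank/fs 12345     # look at the dblk column in the dnode output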
2008 Jun 10
3
ZFS space map causing slow performance
Hello,
I have several ~12TB storage servers using Solaris with ZFS. Two of them have recently developed performance issues where the majority of the time in spa_sync() is spent in the space_map_*() functions. During this time, "zpool iostat" will show 0 writes to disk, while it does hundreds or thousands of small (~3KB) reads each second, presumably reading space map data from
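One way to confirm where the sync time is going, assuming DTrace is available ("tank" is a placeholder pool name):
  dtrace -n 'fbt::space_map_*:entry { @[probefunc] = count(); }'
  zpool iostat -v tank 1   # watch the small reads at the same time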
2007 Jul 10
1
ZFS pool fragmentation
I have a huge problem with ZFS pool fragmentation.
I started investigating the problem about 2 weeks ago: http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0
I found a workaround for now (changing recordsize), but I want a better solution.
The best solution would be a defragmentation tool, but I can see that it is not easy.
When ZFS pool is fragmented then:
1. spa_sync function is