Displaying 20 results from an estimated 800 matches similar to: "Un/Expected ZFS performance?"
2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest
recommended version and am seeing radically different performance
when testing with iozone than I saw in February 2008. I am using
Solaris 10 U5 with all the latest patches.
This is the performance achieved (on a 32GB file) in February last
year:
KB reclen write rewrite read reread
33554432
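The exact iozone command line is not included in this excerpt; a typical
single-stream sequential run over a 32GB file might look like the following
sketch (the pool path and 128KB record size are assumptions, not from the
original post):
  # sequential write/rewrite (-i 0) and read/reread (-i 1) on a 32GB file
  iozone -i 0 -i 1 -s 32g -r 128k -f /pool/iozone.tmp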
2008 Mar 26
0
different read i/o performance for equal guests
Hello,
I'm using Xen 3.0 on a Debian Etch / Dell PowerEdge 860 / 4GB
RAM / Pentium 4 Dual Core 3 GHz machine. It uses a SAS 5iR RAID
controller, configured with two 500GB disks in RAID-1 (mirroring). I was
getting I/O throughput problems, but then I searched the Internet
and found a solution saying that I needed to enable the write cache on
the RAID controller. Well,
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi,
I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit
and in /etc/system I put:
set zfs:zfs_nocacheflush = 1
And after rebooting, I get the message:
sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module
So is this variable not available in the Solaris kernel?
I'm getting really poor
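On releases where the variable does exist, the same tunable can also be
checked and flipped on a running kernel with mdb instead of rebooting; a
sketch, assuming the kernel actually exports the symbol:
  # print the current value, then set it to 1 at runtime
  echo "zfs_nocacheflush/D" | mdb -k
  echo "zfs_nocacheflush/W0t1" | mdb -kw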
2008 Dec 14
1
Is that iozone result normal?
A 5-node server cluster and a single client node are connected by Gigabit Ethernet.
#] iozone -r 32k -r 512k -s 8G
      KB  reclen    write  rewrite     read   reread
 8388608      32    10559     9792    62435    62260
 8388608     512    63012    63409    63409    63138
It seems the 32k write/rewrite performance is very
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4-core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives, connected
via load-shared 4Gbit FC links. This week I have tried many different
configurations, using firmware-managed RAID, ZFS-managed RAID, and
with the controller cache enabled or disabled.
My objective is to obtain the best single-file write performance.
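For the ZFS-managed-RAID case, one layout commonly tried for single-stream
write throughput is a stripe of mirrors across the 12 exported LUNs; a
sketch with hypothetical device names:
  # six 2-way mirrors striped together (device names are examples only)
  zpool create tank \
    mirror c4t0d0 c4t1d0  mirror c4t2d0 c4t3d0  mirror c4t4d0 c4t5d0 \
    mirror c4t6d0 c4t7d0  mirror c4t8d0 c4t9d0  mirror c4t10d0 c4t11d0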
2008 Jul 30
2
zfs_nocacheflush
A question regarding zfs_nocacheflush:
The Evil Tuning Guide says to enable this only if every device is
protected by NVRAM.
However, is it safe to enable zfs_nocacheflush when I also have
local drives (the internal system drives) using ZFS, in particular if
the write cache is disabled on those drives?
What I have is a local zfs pool from the free space on the internal
drives, so I'm
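Whether the internal drives really have their write cache disabled can be
verified from Solaris itself; a sketch using format's expert mode on disks
that expose the cache menu (disk selection omitted):
  # inspect the volatile write cache of an internal disk
  format -e
    > cache
    > write_cache
    > display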
2008 Dec 02
1
zfs_nocacheflush, nvram, and root pools
Hi,
I have a system connected to an external DAS (SCSI) array, using ZFS. The
array has an NVRAM write cache, but it honours SCSI cache flush commands by
flushing the NVRAM to disk. The array has no way to disable this behaviour. A
well-known behaviour of ZFS is that it often issues cache flush commands to
storage in order to ensure data
2008 Jan 30
18
ZIL controls in Solaris 10 U4?
Is it true that Solaris 10 U4 does not have any of the nice ZIL controls
that exist in the various recent OpenSolaris flavors? I would like to
move my ZIL to solid state storage, but I fear I can't do it until I
have another update. Heck, I would be happy to just be able to turn the
ZIL off to see how my NFS-on-ZFS performance is affected before spending
the $'s. Anyone
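For reference, the OpenSolaris-era switch being alluded to is the
zil_disable tunable (later superseded by the per-dataset sync property); on
builds that still have it, turning the ZIL off for a test looks roughly
like this:
  * /etc/system -- disable the ZIL globally (testing only; breaks
  * synchronous write guarantees)
  set zfs:zil_disable = 1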
2010 Aug 06
0
Re: PATCH 3/6 - direct-io: do not merge logically non-contiguous requests
On Fri, May 21, 2010 at 15:37:45AM -0400, Josef Bacik wrote:
> On Fri, May 21, 2010 at 11:21:11AM -0400, Christoph Hellwig wrote:
>> On Wed, May 19, 2010 at 04:24:51PM -0400, Josef Bacik wrote:
>> > Btrfs cannot handle having logically non-contiguous requests submitted. For
>> > example if you have
>> >
>> > Logical: [0-4095][HOLE][8192-12287]
2011 Dec 08
0
folder with no permissions
Hi Matt,
Can you please provide us with more information?
1. what version of glusterfs are you using
2. Was iozone run as root or as a regular user?
a. If as a user, did it have the required permissions?
3. Steps to reproduce the problem
4. Any other errors related to stripe in the client log?
With regards,
Shishir
2007 Nov 19
0
Solaris 8/07 Zfs Raidz NFS dies during iozone test on client host
Hi,
Well I have a freshly built system with ZFS raidz.
Intel P4 2.4 GHz
1 GB RAM
Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
(2) Intel dual-port 1 Gbit NICs
I have (5) 300GB disks in a raidz1 with ZFS.
I've created a couple of filesystems on this:
/export/downloads
/export/music
/export/musicraw
I've shared these out as well.
First with ZFS 'zfs
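The pool and shares described here would typically have been created along
these lines (pool name and device names are assumptions):
  # 5-disk raidz1 pool, filesystems exported over NFS
  zpool create export raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
  zfs create export/downloads
  zfs create export/music
  zfs create export/musicraw
  zfs set sharenfs=on export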
2010 May 25
0
Magic parameter "-ec" of IOZone to increase the write performance of samba
Hi,
I am measuring the performance of my newly bought NAS with IOZone.
The NAS runs an embedded Linux with Samba installed. (The CPU is an Intel Atom.)
IOZone reported write performance of over 1 GB/s as long as the file
size was less than or equal to 1 GB.
Since the NIC is 1 Gbps, the maximum speed should be about 125 MiB/s at
most.
The IOZone test report is amazing.
Later I found that if the
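For what it's worth, -e includes flush (fsync/fflush) time and -c includes
close() time in iozone's measurements, so cached writes are no longer
reported as sustained throughput; a comparison run might look like this
(mount point, record size and file size are examples):
  # inflated result: writes land in the client's page cache
  iozone -i 0 -r 64k -s 2g -f /mnt/nas/testfile
  # more realistic: flush and close costs are included in the timing
  iozone -ec -i 0 -r 64k -s 2g -f /mnt/nas/testfile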
2010 Jul 06
0
[PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi Jeff,
On 07/03/2010 03:58 AM, Jeff Moyer wrote:
> Hi,
>
> Running iozone or fs_mark with fsync enabled, the performance of CFQ is
> far worse than that of deadline for enterprise class storage when dealing
> with file sizes of 8MB or less. I used the following command line as a
> representative test case:
>
> fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s
2017 Sep 11
0
3.10.5 vs 3.12.0 huge performance loss
Here are my results:
Summary: I am not able to reproduce the problem; IOW I get roughly
equivalent numbers for sequential IO when going against 3.10.5 or 3.12.0.
Next steps:
- Could you pass along your volfiles (both the client vol file, from
/var/lib/glusterd/vols/<yourvolname>/patchy.tcp-fuse.vol, and a brick vol
file from the same place)?
- I want to check
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS, and an application server which is reading/writing to hundreds of thousands of files on it, thousands of files at a time.
If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it at once, it can
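When throughput collapses under concurrency like this, a first step is
usually to see whether the pool or the individual disks are the bottleneck
while the load is running; a sketch (the pool name is an assumption):
  # per-vdev bandwidth and IOPS, sampled every 5 seconds
  zpool iostat -v tank 5
  # per-disk service times and utilisation
  iostat -xn 5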
2011 Jan 08
1
how to graph iozone output using OpenOffice?
Hi all,
Can anyone please steer me in the right direction with this one? I've
searched the net, but couldn't find a clear answer.
How do I actually generate graphs from iozone, using OpenOffice? Every
website I've been to simply mentions that iozone can output an xls
file which can be used in MS Excel to generate a 3D graph. But, I
can't see how it's actually done. Can anyone
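iozone can write the spreadsheet itself; a minimal sketch, after which the
file is opened in OpenOffice Calc and a 3D chart is inserted over the
resulting grid (the size cap is just an example):
  # automatic mode, files capped at 512MB, Excel-compatible output
  iozone -a -g 512m -b results.xls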
2007 Sep 28
4
Sun 6120 array again
Greetings,
Last April, in this discussion...
http://www.opensolaris.org/jive/thread.jspa?messageID=143517
...we never found out how (or if) the Sun 6120 (T4) array can be configured
to ignore cache flush (sync-cache) requests from hosts. We're about to
reconfigure a 6120 here for use with ZFS (S10U4), and the evil tuneable
zfs_nocacheflush is not going to serve us well (there is a ZFS
2011 Jan 24
0
ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)
Greetings Gentlemen,
I'm currently testing a new setup for a ZFS-based storage system with
dedup enabled. The system is set up on OI 148, which seems quite stable
with dedup enabled (compared to the OpenSolaris snv_136 build I used
before).
One issue I ran into, however, is quite baffling:
With iozone set to 32 threads, ZFS's ARC seems to consume all available
memory, making
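A common mitigation while investigating is to cap the ARC explicitly; a
sketch, assuming a 16 GB limit suits the machine (the value is an example,
in bytes):
  * /etc/system -- limit the ZFS ARC to 16 GB
  set zfs:zfs_arc_max = 0x400000000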
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello:
Sorry for asking an iozone question on this mailing list, but I couldn't
find any mailing list dedicated to iozone...
In IOZone, is there a way to configure the number of outstanding requests
the client sends to the server? Something along the lines of the IOMeter
option "Number of outstanding requests".
Thanks a lot!
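iozone has no exact equivalent of IOMeter's queue-depth setting, but two
options come close; a sketch (record and file sizes are examples):
  # POSIX async I/O with 8 outstanding operations
  iozone -H 8 -i 0 -i 1 -r 8k -s 1g -f /mnt/test/iozone.tmp
  # or throughput mode with 4 concurrent threads, one file per thread
  iozone -t 4 -r 8k -s 1g -i 0 -i 1 -F /mnt/test/f1 /mnt/test/f2 /mnt/test/f3 /mnt/test/f4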
2008 Feb 19
1
ZFS and small block random I/O
Hi,
We're doing some benchmarking at a customer site (using IOzone), and for
some specific small-block random tests, performance of their X4500 is very
poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). Specifically,
the test is the IOzone multithreaded throughput test with an 8GB file size
and 8KB record size, with the server physmem'd to 2GB.
I noticed a couple of peculiar
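For 8KB random I/O on RAIDZ, the mismatch between the 128KB default
recordsize and the 8KB record is usually the first suspect; a sketch (the
dataset name is an assumption, and the setting only affects newly written
files):
  # match the dataset recordsize to the benchmark's 8KB record size
  zfs set recordsize=8k tank/bench
  zfs get recordsize tank/bench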