Displaying 20 results from an estimated 20 matches for "erickustarz".
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
...'s performance? Or better still, is there any
> kind of benchmark tools for ZFS performance?
filebench sounds like it'd be useful for you. It's coming in the next Nevada
release, but since it looks like you're on Solaris 10, take a look at:
http://blogs.sun.com/erickustarz/entry/filebench
Remember to 'zfs set atime=off mypool/cache' -
there's no need for atime updates on squid caches.
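For reference, the suggested tuning plus a check that it took effect (the pool/filesystem name is taken from the advice above):

  # disable access-time updates; nothing reads atime on a squid cache
  zfs set atime=off mypool/cache
  # confirm the property is now off
  zfs get atime mypool/cache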
--
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
2006 Sep 07
5
Performance problem of ZFS ( Sol 10U2 )
...'zpool iostat -v 1' shows that writes are issued to disk only once every 10 secs, and then it's 2000 rq/s for one sec.
Reads are sustained at approx. 800 rq/s.
Is there a way to tune this read/write ratio? Is this a known problem?
I tried to change vq_max_pending as suggested by Eric in http://blogs.sun.com/erickustarz/entry/vq_max_pending
But it made no change to this write behaviour.
iostat shows approx. 20-30 ms asvc_t, 0%w, and approx. 30% busy on all drives, so they do not seem saturated. (Before, with UFS, they had 90% busy, 1% wait.)
The system is Sol 10 U2, Sun x4200, 4GB RAM.
Please, if you could give me some hint to real...
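For what it's worth, later builds expose the per-vdev queue depth as a kernel tunable; the name below is an assumption for this build (on U2, Eric's blog post patches vq_max_pending live with mdb instead):

  * /etc/system -- tunable name assumed; not defined on all releases
  set zfs:zfs_vdev_max_pending = 10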
2007 May 15
2
Clear corrupted data
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted data in a
pool.
The output from sudo zpool status -v data is:
pool: data
> state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption. Applications may be affected.
> action: Restore the file in question if possible. Otherwise restore the
> entire
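A minimal recovery sequence for this situation, assuming the pool name 'data' from the output above:

  # restore the damaged files from backup first, then re-verify the pool
  zpool scrub data
  zpool status -v data
  # once no errors remain, reset the error counters
  zpool clear data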
2007 May 29
6
NCQ performance
I've been looking into the performance impact of NCQ. Here's what I
found out:
http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
Curiously, there's not much performance data on NCQ available via
a Google search ...
enjoy,
eric
2007 Jul 03
1
zpool status -v: machine readable format?
I was wondering if anyone had a script to parse the "zpool status -v" output into a more machine-readable format?
Thanks,
David
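A rough sketch of such a parser -- note that the 'zpool status' output format is not a stable interface, so treat this as illustrative only:

  #!/bin/sh
  # dump the single-line fields of `zpool status -v` as key=value pairs;
  # continuation lines (config table, damaged-file list) are ignored
  zpool status -v "$1" | awk '
    /^ *(pool|state|status|action|scrub|errors):/ {
        sub(/^ */, ""); key = $1; sub(/:$/, "", key)
        $1 = ""; sub(/^ /, ""); print key "=" $0
    }'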
2007 Feb 03
0
corrupted files and improved 'zpool status -v'
For your reading pleasure:
http://blogs.sun.com/erickustarz/entry/damaged_files_and_zpool_status
eric
2006 Sep 11
1
Looking for common dtrace scripts for NFS top talkers
We started seeing odd behaviour with clients somehow hammering our
ZFS-based NFS server. Nothing is obvious from mpstat/iostat/etc. I've
seen mention before of NFSv3 client dtrace scripts, and I was
wondering if there ever was one for the server end, displaying top
talkers, writes/reads, or locations of such to nail down abusive
clients short of using snoop/tcpdump to nail down via
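If you are on a build that ships the nfsv3 DTrace provider (it arrived in later Nevada builds, so its availability here is an assumption), a one-liner like this counts reads/writes per client:

  # top talkers by client address and operation
  dtrace -n 'nfsv3:::op-read-start, nfsv3:::op-write-start
      { @[args[0]->ci_remote, probename] = count(); }'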
2008 Jan 10
2
NCQ
fun example that shows NCQ lowers wait and %w, but doesn't have
much impact on final speed. [scrubbing, devs reordered for clarity]
                 extended device statistics
device     r/s   w/s     kr/s  kw/s  wait  actv  svc_t  %w  %b
sd2      454.7   0.0  47168.0   0.0   0.0   5.7   12.6   0  74
sd4      440.7   0.0  45825.9   0.0   0.0   5.5   12.4   0  78
sd6      445.7   0.0
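To reproduce numbers like these, start a scrub and sample extended statistics once a second (the pool name is a placeholder):

  zpool scrub tank
  iostat -x 1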
2007 Oct 26
1
data error in dataset 0. what's that?
Hi forum,
I did something stupid the other day, managed to connect an external disk that was part of zpool A such that it appeared in zpool B. I realised as soon as I had done zpool status that zpool B should not have been online, but it was. I immediately switched off the machine, booted without that disk connected and destroyed zpool B. I managed to get zpool A back and all of my data appears
2008 Dec 18
3
automatic forced zpool import with unmatched hostid
...the zpool import while booting. With more than 80 LDOMs on a single host it would be great if we could configure the machine back to the old behavior where it didn't fail, maybe with an /etc/system option.
Any ideas will be greatly appreciated.
For more information see http://blogs.sun.com/erickustarz/en_US/entry/poor_man_s_cluster_end
bbr
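Until such an option exists, the manual workaround is a forced import; the pool name below is a placeholder, and this is only safe when you are certain no other host has the pool active:

  zpool import -f mypool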
2007 May 23
13
Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.
Hi.
I'm all set for doing a performance comparison between Solaris/ZFS and
FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I
think I'm ready. The machine is a 1x quad-core Dell PowerEdge 1950, 2GB
RAM, 15 x 74GB FC 10K disks accessed via 2x2Gbit FC links. Unfortunately the
links to the disks are the bottleneck, so I'm probably going to use no more
than 4 disks.
2007 May 24
9
No zfs_nocacheflush in Solaris 10?
Hi,
I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit
and in /etc/system I put:
set zfs:zfs_nocacheflush = 1
And after rebooting, I get the message:
sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module
So is this variable not available in the Solaris kernel?
I'm getting really poor
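One way to check whether a given kernel defines the variable at all, before touching /etc/system (if the symbol is missing, mdb reports an unknown symbol):

  echo "zfs_nocacheflush/D" | mdb -k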
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks,
Myself and a colleague are currently involved in a prototyping exercise
to evaluate ZFS against our current filesystem. We are looking at the
best way to arrange the disks in a 3510 storage array.
We have been testing with the 12 disks on the 3510 exported as "nraid"
logical devices. We then configured a single ZFS pool on top of this,
using two raid-z arrays. We are getting
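For comparison, a pool built the same way directly from the disks would look roughly like this -- device names are placeholders:

  zpool create tank \
      raidz c2t0d0 c2t1d0 c2t2d0  c2t3d0  c2t4d0  c2t5d0 \
      raidz c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0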
2007 Oct 08
16
Fileserver performance tests
Hi all,
I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) by Sun x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite.
I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two SAS JBODs with 8 SATA HDDs each and created a ZFS pool as a RAID 10 by doing something like the following:
zpool create
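The command is cut off above; for illustration only, a "raid 10" pool in ZFS terms is a set of mirrored pairs, with placeholder device names:

  zpool create tank \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0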
2007 Jul 07
12
ZFS Performance as a function of Disk Slice
First Post!
Sorry, I had to get that out of the way to break the ice...
I was wondering if it makes sense to zone ZFS pools by disk slice, and if it makes a difference with RAIDZ. As I'm sure we're all aware, the end of a drive is half as fast as the beginning (where the zoning stipulates that the physical outside is the beginning and going towards the spindle increases hex
2005 Nov 20
11
NFS question (and Best Practices)
I saw in another post that a best practices doc will be coming, but I figured I would try to get this working.
I'm trying to understand why ZFS uses so many "zfs create"s so I can use it better. What makes sense is that each ZFS fs can have its own options (compression, nfs, atime, quota, etc). I really love this because it is so tuneable -- compression on these
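As an illustration of that per-filesystem tuning (pool and user names are made up):

  zfs create tank/home/alice
  zfs set compression=on tank/home/alice
  zfs set quota=10g tank/home/alice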
2006 Dec 12
23
ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNs. Now how would ZFS be used for the best performance?
What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
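For reference, combining the LUNs is a single command, and ZFS then stripes dynamically across all three top-level vdevs (device names are placeholders):

  zpool create tank c5t0d0 c5t1d0 c5t2d0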
2007 Jan 08
11
NFS and ZFS, a fine combination
Just posted:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
Performance, Availability & Architecture Engineering
Roch Bourbonnais Sun Microsystems, Icnc-Grenoble
Senior Performance Analyst 180, Avenue De L'Europe, 38330,
Montbonnot Saint
2006 Jun 27
28
Supporting ~10K users on ZFS
OK, I know that there's been some discussion on this before, but I'm not sure that any specific advice came out of it. What would the advice be for supporting a largish number of users (10,000 say) on a system that supports ZFS? We currently use vxfs and assign a user quota, and backups are done via Legato Networker.
From what little I currently understand, the general
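One common ZFS approach at this scale (per-user quotas did not exist yet) is one filesystem per user carrying a quota property, sketched here with made-up names:

  zfs create tank/home/user1001
  zfs set quota=500m tank/home/user1001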
2007 Sep 19
53
enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS
We are looking for a replacement enterprise file system to handle storage
needs for our campus. For the past 10 years, we have been happily using DFS
(the distributed file system component of DCE), but unfortunately IBM
killed off that product and we have been running without support for over a
year now. We have looked at a variety of possible options, none of which
have proven fruitful. We are