similar to: zpool iostat / how to tell if your iop bound

Displaying 20 results from an estimated 3000 matches similar to: "zpool iostat / how to tell if your iop bound"

2005 Dec 22
2
zpool iostat output gets buffered
I'm trying to write a SLAMD (http://www.slamd.com/) resource monitor that can be used to measure the I/O throughput on a ZFS pool, and in particular to be able to get the read and write rates. In order to do this, I'm basically executing "zpool iostat {interval}" and parsing the output to capture the values in the "bandwidth read" and "bandwidth
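For reference, a minimal sketch of one way to consume that stream from another program without the samples sitting in a pipe buffer, assuming Python is available and using a hypothetical pool name "tank". Note that bufsize=1 only controls the reading side; if zpool iostat itself block-buffers when its stdout is not a terminal, running it under a pseudo-terminal is the usual workaround.

    import subprocess

    # Read "zpool iostat tank 5" line by line; "tank" and the 5-second interval
    # are placeholders. Data rows have 7 fields: name, alloc, free, read ops,
    # write ops, read bandwidth, write bandwidth.
    proc = subprocess.Popen(["zpool", "iostat", "tank", "5"],
                            stdout=subprocess.PIPE,
                            bufsize=1,                 # line-buffered on our side
                            universal_newlines=True)
    for line in proc.stdout:
        fields = line.split()
        if len(fields) == 7 and fields[0] == "tank":   # skip headers/separators
            print("read %s/s  write %s/s" % (fields[5], fields[6]))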
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool        488K  20.0T      0      0      0      0
xpool        488K  20.0T      0      0      0      0
xpool
2008 Dec 17
12
disk utilization is over 200%
Hello, I use Brendan's sysperfstat script to see the overall system performance and found that the disk utilization is over 100:
15:51:38   14.52  15.01  200.00  24.42   0.00   0.00  83.53   0.00
15:51:42   11.37  15.01  200.00  25.48   0.00   0.00  88.43   0.00
         ------ Utilisation ------   ------ Saturation ------
Time      %CPU   %Mem   %Disk   %Net     CPU    Mem
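Worth noting for readers: a disk utilization figure that is built by summing per-disk %busy values can legitimately exceed 100% on a multi-disk box. A small illustrative sketch (the per-disk numbers are hypothetical) showing how summing, averaging and taking the maximum differ:

    # Hypothetical per-disk %busy samples; 200% would simply mean the equivalent
    # of two fully busy disks, not necessarily an error in the collector.
    per_disk_busy = [100.0, 100.0, 3.0, 1.0]

    summed = sum(per_disk_busy)                   # 204.0 - unbounded
    average = summed / len(per_disk_busy)         # 51.0  - bounded 0..100
    hottest = max(per_disk_busy)                  # 100.0 - shows the bottleneck
    print("sum=%.1f avg=%.1f max=%.1f" % (summed, average, hottest))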
2005 Nov 17
2
zpool iostat question
Hello ZFSland, Is there any significance in the fact that the bandwidth/read figures for a simple cpio into a ZFS filesystem should be multiples of 21.3K (when non-zero) as follows? What could determine this figure? Do I need to read a manpage? ;-) Thanks... Sean. -----
[root@global:/36g2] # zpool iostat 3
               capacity     operations    bandwidth
pool         used  avail   read
2011 Jun 01
11
SATA disk perf question
I figure this group will know better than any other I have contact with: is 700-800 I/Ops reasonable for a 7200 RPM SATA drive (a 1 TB Sun-badged Seagate ST31000N in a J4400)? I have a resilver running and am seeing about 700-800 writes/sec on the hot spare as it resilvers. There is no other I/O activity on this box, as this is a remote replication target for production data. I have a the
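A rough sanity check on those numbers, assuming a typical ~8.5 ms average seek for a 7200 RPM SATA drive (the exact figure for this particular drive is an assumption here):

    # Back-of-the-envelope random-I/O ceiling for a 7200 RPM drive.
    rpm = 7200
    avg_seek_ms = 8.5                               # assumed typical spec
    half_rotation_ms = (60000.0 / rpm) / 2.0        # ~4.17 ms rotational latency
    random_iops = 1000.0 / (avg_seek_ms + half_rotation_ms)
    print("purely random IOPS ~ %.0f" % random_iops)   # roughly 75-80
    # 700-800 writes/sec therefore suggests the resilver writes are largely
    # sequential and/or coalesced by the drive's cache and command queueing,
    # rather than random.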
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
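For comparison, the same kind of sequential-read measurement can be scripted; a sketch assuming Python, a placeholder path, and that the file is not already cached (use a file much larger than RAM, or export/import the pool first, for a fair number):

    import time

    PATH = "/tank/bigfile"        # placeholder: a large file on ZFS, or a raw device node
    CHUNK = 1024 * 1024           # read in 1 MiB chunks, like dd bs=1024k

    total = 0
    start = time.time()
    with open(PATH, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            total += len(buf)
    elapsed = time.time() - start
    print("%.1f MB/s" % (total / elapsed / 1e6))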
2006 Sep 07
5
Performance problem of ZFS ( Sol 10U2 )
Hi, I deployed ZFS on our mailserver recently, hoping for eternal peace after running on UFS and moving files with each TB added. It is a mailserver - its mdirs are on a ZFS pool:
                           capacity     operations    bandwidth
pool                      used  avail   read  write   read  write
------------------------- -----  -----  -----  -----  -----  -----
2006 May 23
1
iostat numbers for ZFS disks, build 39
I updated an i386 system to b39 yesterday, and noticed this when running iostat:
   r/s    w/s   kr/s          kw/s  wait actv wsvc_t asvc_t  %w  %b device
   0.0    0.5    0.0          10.0   0.0  0.0    0.0    0.5   0   0 c0t0d0
   0.0    0.5    0.0          10.0   0.0  0.0    0.0    0.6   0   0 c0t1d0
   0.0   65.1    0.0   119640001.5   0.0  0.0    0.0    0.3   0   2 c0t2d0
   0.0   65.1    0.0   119640090.2   0.0
2012 Jul 18
4
asterisk 1.8 on Solaris/sparc
I've got the latest asterisk 1.8 running on a Netra X1 with Solaris 10 u10. The system itself is happy and phone calls (between two parties) seem fine. Unfortunately, when a caller listens to a Playback recording, there seem to be moments of stutter - perhaps 1 second of stutter for every 10 seconds of Playback. The stutter is not consistent at the same point of the playback file. To
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi! I have a problem with ZFS and most likely the SATA PCI-X controllers. I run OpenSolaris 2008.11 snv_98 and my hardware is a Sun Netra X4200 M2 with 3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis which each hold 4 SATA disks manufactured by Seagate, model ES.2 (500 and 750), for a total of 12 disks. Every disk has its own eSATA cable connected to the ports on the PCI-X
2010 Feb 18
3
improve meta data performance
We have a Sun Fire X4500 running Solaris 10U5 which does about 5-8k NFS ops, of which about 90% are metadata. In hindsight it would have been significantly better to use a mirrored configuration, but we opted for 4 x (9+2) raidz2 at the time. We cannot take the downtime necessary to change the zpool configuration. We need to improve the metadata performance with little to no money. Does anyone
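One cheap first step is to check how well metadata is already being served from the ARC before adding hardware; a sketch that parses the arcstats kstat, with the caveat that the exact statistic names should be verified against the release in use ("kstat zfs:0:arcstats"):

    import subprocess

    # Parse "kstat -p zfs:0:arcstats" (module:instance:name:statistic<TAB>value).
    out = subprocess.Popen(["kstat", "-p", "zfs:0:arcstats"],
                           stdout=subprocess.PIPE).communicate()[0]
    stats = {}
    for line in out.decode().splitlines():
        key, _, value = line.partition("\t")
        stats[key.rsplit(":", 1)[-1]] = value.strip()

    hits = float(stats.get("demand_metadata_hits", 0))
    misses = float(stats.get("demand_metadata_misses", 0))
    if hits + misses:
        print("demand metadata ARC hit ratio: %.1f%%" % (100.0 * hits / (hits + misses)))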
2008 Apr 29
0
zpool attach vs. zpool iostat
Hello zfs-discuss, S10U4+patches, SPARC. If I attach a disk to a vdev in a pool to get a mirrored configuration, then during resilver "zpool iostat 1" will report only reads being done from the pool and basically no writes. If I do "zpool iostat -v 1" then I can see it is writing to the new device; however, at the pool and mirror/vdev level it is still reporting only reads. If during resilvering reads
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi. T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1. Command 'zpool export f3-2' has been hung for 30 minutes now and is still going. Nothing else is running on the server. I can see one CPU being 100% in SYS like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0   67   220  110   20    0    0    0    0
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS, System was rebooted and after reboot server again System is snv_39, SPARC, T2000
bash-3.00# ptree
7     /lib/svc/bin/svc.startd -s
  163   /sbin/sh /lib/svc/method/fs-local
    254   /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R    254    163      7      7   0 0x4a004000
2009 Jan 30
1
RFE: parsable iostat and zpool layout
I would like zpool iostat to take a "-p" option to output parsable statistics with absolute counters/figures that could, for example, be fed to MRTG, RRD, et al. The "zpool iostat [-v] POOL 60 [N]" form is great for humans but not very API-friendly; N=2 is a bit of overkill and unreliable. Is this info available in kstat, or is this an RFE candidate? In Solaris 10? Ditto for zpool
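Until such an option exists, one stop-gap is to take the second sample of the human-readable output and expand the K/M/G suffixes into absolute numbers for RRD/MRTG; a sketch with a placeholder pool name (the first sample is dropped because it is an average since boot, which is exactly why two samples are requested):

    import subprocess

    SUFFIX = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

    def to_number(field):
        # "1.21K" -> 1239.04, "381" -> 381.0
        if field and field[-1] in SUFFIX:
            return float(field[:-1]) * SUFFIX[field[-1]]
        return float(field)

    out = subprocess.Popen(["zpool", "iostat", "tank", "60", "2"],
                           stdout=subprocess.PIPE).communicate()[0].decode()
    rows = [l.split() for l in out.splitlines()
            if l.split()[:1] == ["tank"]]            # keep only the pool rows
    name, alloc, free, ops_r, ops_w, bw_r, bw_w = rows[-1]
    print(int(to_number(bw_r)), int(to_number(bw_w)))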
2009 Sep 21
2
Question about iostat output
Hello, We are planning to move most of our servers to ESX, but before buying our SAN we want to gather some I/O stats to see if iSCSI is enough or if we have to go with FC. So I found a plugin for Nagios that can log I/O stats with iostat. So far it's fine with single-disk/one-partition servers, but on our Oracle Database 10g server we have two drives in RAID 1 (/dev/sda) and 4 other
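If the per-device rates need to be rolled up across several drives, a small parser over a two-sample iostat run is one approach; a sketch with hypothetical device names, noting that column positions vary between sysstat versions and should be checked against the header:

    import subprocess

    DEVICES = ["sda", "sdb", "sdc", "sdd", "sde"]    # placeholders for the drives of interest

    out = subprocess.Popen(["iostat", "-dxk", "5", "2"],
                           stdout=subprocess.PIPE).communicate()[0].decode()
    # Keep only the second sample; the first is an average since boot.
    last = out.split("Device")[-1].splitlines()

    reads = writes = 0.0
    for line in last:
        fields = line.split()
        if fields and fields[0] in DEVICES:
            # Assumes the older "Device: rrqm/s wrqm/s r/s w/s ..." layout,
            # i.e. r/s and w/s in columns 4 and 5; verify against your header.
            reads += float(fields[3])
            writes += float(fields[4])
    print("aggregate r/s=%.1f w/s=%.1f" % (reads, writes))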
2007 Apr 15
0
zpool iostat : This command can be tricky ...
I really need to take a longer look here.
/*
 * zpool iostat [-v] [pool] ... [interval [count]]
 *
 *      -v      Display statistics for individual vdevs
 *
 * This command can be tricky because we want to be able to deal with pool
. . .
I think I may need to deal with a raw option here?
/*
 * Enter the main iostat loop.
 */
cb.cb_list = list;
2010 May 28
0
zpool iostat question
Following is the output of "zpool iostat -v". My question is regarding the datapool row and the raidz2 row statistics. The datapool row statistic "write bandwidth" is 381, which I assume takes into account all the disks - although it doesn't look like it's an average. The raidz2 row statistic "write bandwidth" is 36, which is where I am confused. What
2005 Aug 29
14
Oracle 9.2.0.6 on Solaris 10
How can I tell if this is normal behaviour? Oracle imports are horribly slow, an order of magnitude slower than on the same hardware with a slower disk array and Solaris 9. What can I look for to see where the problem lies? The server is 99% idle right now, with one database running. Each sample is about 5 seconds. I've tried setting kernel parameters despite the docs saying that
2011 Jun 01
1
How to properly read "zpool iostat -v" ? ;)
Hello experts, I've had a lingering question for some time: when I use "zpool iostat -v" the values do not quite sum up. In the example below with a raidz2 array made of 6 drives:
* the reported 33K of writes is less than two disks' workload at this time (at 17.9K each); overall disk writes are 107.4K = 325% of 33K.
* write ops sum up to 18 = 225% of 8 ops to
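One consistent way to read those numbers (not a definitive accounting): the raidz2 row reports logical bytes, while the six leaf disks report physical bytes including parity, allocation padding, and ditto-copied metadata, so the leaf total is expected to exceed the vdev figure. Worked through with the excerpt's values:

    disks, parity = 6, 2
    logical_kb = 33.0            # raidz2/pool row in the excerpt
    physical_kb = 107.4          # sum over the six leaf disks

    parity_floor = float(disks) / (disks - parity)   # 1.5x minimum expansion
    observed = physical_kb / logical_kb              # ~3.25x
    print("minimum %.2fx, observed %.2fx" % (parity_floor, observed))
    # The gap above 1.5x is consistent with small writes (each carrying its own
    # parity sectors and padding) and with metadata, which ZFS writes in multiple
    # copies; pinning it down exactly would need the record sizes involved.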