Displaying 20 results from an estimated 10000 matches similar to: "zpool iostat output gets buffered"

2009 Jan 30
1
RFE: parsable iostat and zpool layout
I would like zpool iostat to take a "-p" option to output parsable statistics with absolute counters/figures that could, for example, be fed to MRTG, RRD, et al. The "zpool iostat [-v] POOL 60 [N]" form is great for humans but not very API-friendly; N=2 is a bit of overkill and unreliable. Is this info available in kstat, or is this an RFE candidate? In Solaris 10? Ditto for zpool
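Later OpenZFS releases grew essentially this feature; a minimal sketch, assuming a zpool whose iostat supports the -H (scripted, tab-separated, no headers) and -p (exact parsable numbers) flags, with a hypothetical pool name:

    # Tab-separated, header-free, exact counters -- suitable for feeding
    # straight into MRTG or an RRD updater. The first sample is a
    # since-boot average, so keep only the second.
    zpool iostat -Hp tank 60 2 | tail -1

On the Solaris 10 builds of that era no such flags existed, which is what this RFE asks for.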
2010 Mar 11
1
zpool iostat / how to tell if you're IOP-bound
What is the best way to tell if you're bound by the number of individual operations per second / random I/O? "zpool iostat" has an "operations" column, but this doesn't really tell me if my disks are saturated. Traditional "iostat" doesn't seem to be the greatest place to look when utilizing ZFS. Thanks, Chris
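One way to answer this, not from the original thread: Solaris's extended iostat display exposes per-device queue depth and busy time, which show operation saturation more directly than the pool-level counters do:

    # actv (active queue) pinned high and %b near 100 while kr/s+kw/s
    # stay modest is the classic sign of being IOPS-bound rather than
    # bandwidth-bound.
    iostat -xn 5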
2005 Nov 17
2
zpool iostat question
Hello ZFSland, Is there any significance in the fact that the bandwidth/read figures for a simple cpio into a ZFS filesystem should be multiples of 21.3K (when non-zero) as follows? What could determine this figure? Do I need to read a manpage? ;-) Thanks... Sean. -----

    [root@global:/36g2] # zpool iostat 3
                   capacity     operations    bandwidth
    pool         used  avail   read
2008 Jul 05
4
iostat and monitoring
Hi gurus, I like zpool iostat and I like system monitoring, so I set up a script within SMA to let me get the zpool iostat figures through SNMP. The problem is that, as zpool iostat is only run once for each SNMP query, it always reports a static set of figures, like so:

    root@exodus:snmp # zpool iostat -v
                   capacity     operations    bandwidth
    pool         used  avail   read
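The first sample zpool iostat prints is an average since boot, which is why a one-shot invocation always looks static. A minimal sketch of a workaround for an SNMP exec script, with a hypothetical pool name:

    #!/bin/sh
    # Take two samples one second apart; the second data line reflects
    # actual activity during the interval, so keep only that.
    zpool iostat tank 1 2 | tail -1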
2008 Apr 29
0
zpool attach vs. zpool iostat
Hello zfs-discuss, S10U4+patches, SPARC. If I attach a disk to a vdev in a pool to get a mirrored configuration, then during resilver "zpool iostat 1" will report only reads being done from the pool and basically no writes. If I do "zpool iostat -v 1" then I can see it is writing to the new device; however, at the pool and mirror/vdev level it is still reporting only reads. If during resilvering reads
2010 May 28
0
zpool iostat question
Following is the output of "zpool iostat -v". My question is regarding the datapool row and the raidz2 row statistics. The datapool row statistic "write bandwidth" is 381, which I assume takes into account all the disks, although it doesn't look like it's an average. The raidz2 row statistic "write bandwidth" is 36, which is where I am confused. What
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 U5 (5/08), on a SunFire T5220, and this is our first rollout of ZFS and zpools. Have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0). Created zpool my_pool as raidz using 5 disks + 1 spare: c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0. I am working on alerting & recovery plans for disk failures in the zpool. As a test, I have pulled disk
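Not from the original post, but one way to force the issue: ZFS generally only marks a device faulted when I/O to it fails, so a pulled-but-idle disk can linger as ONLINE. A scrub touches every member:

    # Force I/O to all devices, then re-check; the pulled disk should
    # drop to UNAVAIL/FAULTED, allowing the spare to kick in.
    zpool scrub my_pool
    zpool status -x my_pool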
2007 Apr 15
0
zpool iostat : This command can be tricky ...
I really need to take a longer look here.

    /*
     * zpool iostat [-v] [pool] ... [interval [count]]
     *
     *      -v   Display statistics for individual vdevs
     *
     * This command can be tricky because we want to be able to deal with pool
     . . .

I think I may need to deal with a raw option here?

    /*
     * Enter the main iostat loop.
     */
    cb.cb_list = list;
2011 Jun 01
1
How to properly read "zpool iostat -v" ? ;)
Hello experts, I've had a lingering question for some time: when I use "zpool iostat -v" the values do not quite sum up. In the example below with a raidz2 array made of 6 drives:
* the reported 33K of writes are less than two disks' workload at this time (at 17.9K each); overall disk writes are 107.4K = 325% of 33K.
* write ops sum up to 18 = 225% of 8 ops to
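A hedged note, not from the thread: each row of zpool iostat -v is computed from its own counters over the sampling window, and raidz parity and metadata copies land on the member disks without appearing in the vdev-level logical totals, so short windows rarely sum cleanly. Averaging over a longer interval usually brings the rows closer together:

    # A 60-second window smooths the per-disk bursts that make the
    # member rows disagree with the raidz2/pool totals.
    zpool iostat -v tank 60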
2009 Nov 20
2
ZFS Send Priority and Performance
I have several X4540 Thor systems, each with one large zpool, that replicate data to a backup host via zfs send/recv. The process works quite well when there is little to no usage on the source systems. However, when the source systems are under load, replication slows to a near crawl. Without load, replication usually streams along near 1 Gbps, but drops down to anywhere between 0 - 5000
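A mitigation often suggested on this list, not from the original post (host, pool, and snapshot names hypothetical): put a large buffer between send and recv so bursts on the sender don't stall the receiver and vice versa:

    # mbuffer smooths the stream: -s is the block size, -m the buffer size.
    zfs send tank/data@today | mbuffer -s 128k -m 1G \
        | ssh backuphost 'mbuffer -s 128k -m 1G | zfs recv -F backup/data'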
2008 Nov 13
4
Zpool mishap
Evening all, I'm new to Solaris, but after drooling over ZFS for ages I finally took the plunge. First off I had 2x1TB HDDs in RAID1 XFS format using mdadm, so using an OpenSolaris VM image I transferred one side of the mirror to the other in ZFS (using rsync, and it took 3 days!). So with a 1-disk zpool carrying all my data I brought both the drives over in a new box, all going swimmingly
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and an iostat -Xn show lots of idle disk time, no above-average service times, no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
2006 May 23
1
iostat numbers for ZFS disks, build 39
I updated an i386 system to b39 yesterday, and noticed this when running iostat:

       r/s    w/s   kr/s         kw/s  wait actv wsvc_t asvc_t  %w  %b device
       0.0    0.5    0.0         10.0   0.0  0.0    0.0    0.5   0   0 c0t0d0
       0.0    0.5    0.0         10.0   0.0  0.0    0.0    0.6   0   0 c0t1d0
       0.0   65.1    0.0  119640001.5   0.0  0.0    0.0    0.3   0   2 c0t2d0
       0.0   65.1    0.0  119640090.2   0.0
2011 Jan 18
4
Zpool Import Hanging
Hi All, I believe this has been asked before, but I wasn't able to find too much information about the subject. Long story short, I was moving data around on a storage zpool of mine and a zfs destroy <filesystem> hung (or so I thought). This pool had dedup turned on at times while imported as well; it's running on a Nexenta Core 3.0.1 box (snv_134f). The first time the machine was
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16 GB of RAM, OpenSolaris upgraded to snv_134. The zpool
2007 Dec 12
0
Degraded zpool won''t online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The messages in /var/adm/messages for the disks were 'device busy too long'. Then SMF printed this message:

    Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
    Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
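When a spare has already resilvered in, the sequence usually described on the list (device names hypothetical, not from this post) is to bring the original disk back and then return the spare:

    # Re-probe the repaired/replaced original device...
    zpool online tank c2t3d0
    # ...then detach the spare that took its place, returning it to the
    # spare pool.
    zpool detach tank c4t7d0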
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell makes zpool resilvering so slow? I'm running OpenSolaris 2009.06. I have had a large number of problematic disks due to a bad production batch, leading me to resilver quite a few times, progressively replacing each disk as it dies (and now preemptively removing disks). My complaint is that resilvering ends up
2009 Sep 21
2
Question about iostat output
Hello, We are planning to move most of our servers to ESX, but before buying our SAN we want to do some I/O stats to see if iSCSI is enough or if we have to go with FC. So I found a plugin for Nagios that can log I/O stats with iostat. So far it's fine with single-disk/one-partition servers, but on our Oracle Database 10g server we have two drives in RAID 1 (/dev/sda) and 4 other
2008 Jan 03
1
The iostat command
Hi All, I am learning the iostat command to understand disk I/O statistics. We have 2 CentOS 4 servers running where Oracle is installed. We installed them 2 weeks ago. At that time, these servers performed well. But now we have come to know that these 2 machines are quite slow compared to the first week. So some say: run iostat to see statistics. I am not familiar with the command. I
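A minimal starting point with sysstat's iostat, assuming the CentOS 4 machines have the sysstat package installed:

    # -x extended statistics, -k kilobytes, refreshed every 5 seconds.
    # The first report is an average since boot; on later reports, high
    # await and %util point at a struggling disk.
    iostat -xk 5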
2011 May 28
7
Have my RMA... Now what??
I have a raidz2 pool with one disk that seems to be going bad; several errors are noted in iostat. I have an RMA for the drive; however, now I am wondering how I proceed. I need to send the drive in and then they will send me one back. If I had the drive on hand, I could do a zpool replace. Do I do a zpool offline? zpool detach? Once I get the drive back and put it in the same drive bay..
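A sketch of the commonly recommended order of operations (pool and device names hypothetical): offline the dying disk before pulling it, then replace it in place once the RMA drive is installed in the same bay:

    # Stop queueing I/O to the failing member; raidz2 still has one
    # level of redundancy left in the meantime.
    zpool offline tank c2t5d0
    # After the replacement drive arrives and sits in the same bay:
    zpool replace tank c2t5d0
    zpool status tank    # watch the resilver complete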