Displaying 20 results from an estimated 3000 matches similar to: "zpool attach vs. zpool iostat"
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The messages in /var/adm/messages for the disks were 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
2007 Sep 17
2
zpool create -f not applicable to hot spares
Hello zfs-discuss,
If you do 'zpool create -f test A B C spare D E' and D or E contains a
UFS filesystem, then despite -f the zpool command will complain that
there is a UFS file system on D.
Workaround: create a test pool with -f on D and E, destroy it, and
then create the first pool with D and E as hot spares, as sketched below.
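A minimal sketch of that workaround (device names A-E are the placeholders from above):

  zpool create -f tmp D E            # -f is honoured here and clears the UFS labels
  zpool destroy tmp
  zpool create test A B C spare D E  # spares are now accepted without complaint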
I've tested it on s10u3 + patches - can someone confirm
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case drive c0t6d0 was failing so I ran,
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state.
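One way to inspect the state (a diagnostic sketch using standard zpool commands, not a confirmed fix for this situation):

  zpool status -v home   # check whether any device is still listed OFFLINE or FAULTED
  zpool clear home       # reset error counters once every device has resilvered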
Hopefully this report is incorrect. At this point the vdev in question
now has
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss,
One of the disks started to behave strangely.
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv  port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code,
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c
 * All i/os smaller than zfs_vdev_cache_max will be turned into
 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
 * vdev's vdev_cache.
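For reference, the defaults of that era were 16K, 16, and 10MB respectively, i.e. reads smaller than 16K are inflated to 64K (1<<16) device reads, with at most 10MB cached per vdev. A sketch of overriding these via /etc/system (values shown are the defaults, purely illustrative):

  * inflate reads smaller than zfs_vdev_cache_max into
  * 1<<zfs_vdev_cache_bshift = 64K reads, caching at most
  * zfs_vdev_cache_size bytes per vdev
  set zfs:zfs_vdev_cache_max=16384
  set zfs:zfs_vdev_cache_bshift=16
  set zfs:zfs_vdev_cache_size=10485760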
While it
2007 Apr 24
2
software RAID vs. HW RAID - part III
Hello zfs-discuss,
http://milek.blogspot.com/2007/04/hw-raid-vs-zfs-software-raid-part-iii.html
--
Best regards,
Robert Milkowski mailto:rmilkowski@task.gda.pl
http://milek.blogspot.com
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell
makes zpool resilvering so slow? I'm running OpenSolaris 2009.06.
I have had a large number of problematic disks due to a bad production
batch, leading me to resilver quite a few times, progressively
replacing each disk as it dies (and now preemptively removing disks.)
My complaint is that resilvering ends up
2007 Mar 05
3
How to interrupt a zpool scrub?
Dear all
Is there a way to stop a running scrub on a zfs pool? Same question applies to a running resilver.
Both render our fileserver unusable due to massive CPU load, so we'd like to postpone them.
In the docs it says that resilvering and scrubbing survive a reboot, so I am not even sure if a reboot would stop scrubbing or resilvering.
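For what it's worth, a scrub can be cancelled by re-issuing the scrub command with -s; a resilver has no equivalent stop option and restarts if interrupted ('tank' is a placeholder pool name):

  zpool scrub -s tank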
Any help greatly appreciated!
Cheers, Thomas
2007 Sep 18
3
ZFS and encryption
Hello zfs-discuss,
I wonder if ZFS will be able to take any advantage of Niagara's
built-in crypto?
--
Best regards,
Robert Milkowski mailto:rmilkowski@task.gda.pl
http://milek.blogspot.com
2013 Mar 23
0
Drives going offline in zpool
Hi,
I have a Dell MD1200 connected to two heads (Dell R710). The heads have
a Perc H800 card, and the drives are configured as RAID0 virtual disks in
the RAID controller.
One of the drives crashed and was replaced by a spare. Resilvering was
triggered but fails to complete because drives keep going offline. I have
to reboot the head (R710) before the drives come back online. This happened repeatedly
when
2005 Dec 22
2
zpool iostat output gets buffered
I'm trying to write a SLAMD (http://www.slamd.com/) resource monitor
that can be used to measure the I/O throughput on a ZFS pool, and in
particular to be able to get the read and write rates. In order to do
this, I'm basically executing "zpool iostat {interval}" and parsing the
output to capture the values in the "bandwidth read" and "bandwidth
2009 Jan 30
1
RFE: parsable iostat and zpool layout
I would like zpool iostat to take a "-p" option to output parsable statistics with absolute counters/figures that for example could be fed to MRTG, RRD, et al.
The "zpool iostat [-v] POOL 60 [N]" is great for humans but not very api-friendly; N=2 is a bit overkill and unreliable. Is this info available in kstat, or is this an RFE candidate? In Solaris10?
Ditto for zpool
2006 Nov 30
0
ZFS caught resilvering when only one side of mirror present
When I booted my laptop up this morning it took much longer than normal
and there was a lot of disk activity even after I logged in.
A quick use of dtrace and iostat revealed that all the writes were to
the zpool.
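The one-liner was along these lines (a reconstruction using the io provider, not the exact script used):

  dtrace -n 'io:::start { @[args[1]->dev_statname] = count(); }'

which counts I/O starts per device name and quickly shows where the writes are landing.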
I ran zpool status and found that the pool was resilvering.
The strange thing is that while the pool is a mirror, one side of it is
offline since it is on an external USB disk - which
2009 Aug 04
2
flowadm -i 1 - shows only first flow
Hi,
OSOL, b118
> milek@r600:~# flowadm show-flow
> FLOW         LINK     IPADDR   PROTO  PORT  DSFLD
> local_25     iwh0     --       tcp    25    --
> local_22     iwh0     --       tcp    22    --
> milek@r600:~# flowadm show-flow -s -i 1
> FLOW         IPACKETS  RBYTES  IERRORS
2010 Mar 11
1
zpool iostat / how to tell if you're IOPS bound
What is the best way to tell if you're bound by the number of individual
operations per second / random I/O? "zpool iostat" has an "operations" column,
but this doesn't really tell me if my disks are saturated. Traditional
"iostat" doesn't seem to be the greatest place to look when utilizing ZFS.
Thanks,
Chris
2007 Apr 15
0
zpool iostat : This command can be tricky ...
I really need to take a longer look here.
/*
* zpool iostat [-v] [pool] ... [interval [count]]
*
* -v Display statistics for individual vdevs
*
* This command can be tricky because we want to be able to deal with pool
.
.
.
I think I may need to deal with a raw option here?
/*
* Enter the main iostat loop.
*/
cb.cb_list = list;
2010 May 28
0
zpool iostat question
Following is the output of "zpool iostat -v". My question is regarding the datapool row and the raidz2 row statistics. The datapool row statistic "write bandwidth" is 381, which I assume takes into account all the disks - although it doesn't look like it's an average. The raidz2 row statistic "write bandwidth" is 36, which is where I am confused. What
2011 Jun 01
1
How to properly read "zpool iostat -v" ? ;)
Hello experts,
I've had a lingering question for some time: when I
use "zpool iostat -v" the values do not quite sum up.
In the example below with a raidz2 array made of 6
drives:
* the reported 33K of writes are less than two disks'
workload at this time (at 17.9K each), overall
disks writes are 107.4K = 325% of 33K.
* write ops sum up to 18 = 225% of 8 ops to
2008 Sep 05
6
resilver speed.
Is there any way to control the resilver speed? Having attached a third disk to a mirror (so I can replace the other disks with larger ones), the resilver goes at a fraction of the speed of the same operation using Disk Suite. However, it still renders the system pretty much unusable for anything else.
So I would like to control the rate of the resilver. Either slow it down a lot so that the
2005 Nov 17
2
zpool iostat question
Hello ZFSland,
Is there any significance in the fact that the bandwidth/read figures for a simple cpio into a ZFS filesystem should be multiples of 21.3K (when non-zero) as follows? What could determine this figure? Do I need to read a manpage? ;-)
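One possible explanation (an assumption, not a confirmed answer): with a 3-second interval, a single 64K read per interval is reported as 64K / 3 = 21.3K of bandwidth, and 64K is exactly the 1<<zfs_vdev_cache_bshift size that the vdev cache inflates small reads to, as quoted in the vdev_cache.c entry above.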
Thanks... Sean.
-----
[root@global:/36g2] # zpool iostat 3
               capacity     operations    bandwidth
pool         used  avail   read