Displaying 20 results from an estimated 2000 matches similar to: "Raidz2 slow read speed (under 5MB/s)"
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss,
One of the disks started to behave strangely.
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
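When a SATA port starts resetting like this, it is worth checking whether ZFS and FMA have noticed before worrying about the resilver. A minimal first pass with stock Solaris tooling (the pool name "tank" is a placeholder):
# zpool status -v tank      # per-device read/write/checksum error counters
# iostat -En                # soft/hard/transport error counts per disk
# fmdump -eV | tail         # recent FMA error telemetry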
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The /var/adm/messages entries for the disks read 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
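The usual way out of this state, once the original disk is healthy again, is to online it and then detach the spare so it returns to the spare list (a sketch; pool and device names are placeholders):
# zpool online tank c2t3d0     # bring the original disk back into the vdev
# zpool detach tank c9t0d0     # release the spare once the resilver completes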
2010 May 28
0
zpool iostat question
Following is the output of "zpool iostat -v". My question is regarding the datapool row and the raidz2 row statistics. The datapool row statistic "write bandwidth" is 381, which I assume takes into account all the disks, although it doesn't look like an average. The raidz2 row statistic "write bandwidth" is 36, which is where I am confused. What
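One detail that often explains odd zpool iostat numbers: without an interval argument it prints averages since the pool was imported, not current rates, and the per-vdev rows count physical I/O including parity. Sampling gives comparable figures:
# zpool iostat -v datapool 5    # fresh per-vdev statistics every 5 seconds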
2011 Jun 01
1
How to properly read "zpool iostat -v" ? ;)
Hello experts,
I've had a lingering question for some time: when I
use "zpool iostat -v" the values do not quite sum up.
In the example below with a raidz2 array made of 6
drives:
* the reported 33K of writes are less than two disks'
workload at this time (at 17.9K each); overall
disk writes are 107.4K = 325% of 33K.
* write ops sum up to 18 = 225% of 8 ops to
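The gap is mostly parity and padding. A rough worked example, assuming a 6-disk raidz2 with large blocks: each stripe holds 4 data sectors plus 2 parity sectors, so physical writes are at least 6/4 = 1.5x the pool-level figure; small blocks, metadata ditto copies and sector padding push the ratio higher, which is how 33K of pool-level writes can become 107.4K at the disks.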
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
I currently am getting good speeds out of my existing system (8x 2TB in a
RAIDZ2 exported over fibre channel) but there's no such thing as too much
speed, and these other two drive bays are just begging for drives in
them.... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase
speed, or will the extra parity writes reduce speed, or will the two factors
offset and leave things
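A back-of-envelope answer: streaming bandwidth tracks data spindles, and 8-disk RAIDZ2 has 6 of them versus 7 for 10-disk RAIDZ3, so the best case is roughly a 7/6 (about 17%) sequential gain. Random IOPS should not improve at all: either way it is a single vdev, which services roughly one random I/O at a time regardless of width.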
2010 Feb 25
1
raidz2 array FAULTED with only 1 drive down
I recently had a hard drive die on my 6-drive raidz2 array (4+2). Unfortunately, once the dead drive no longer registered, Linux rearranged all of the drive names, so ZFS couldn't figure out which drives went where.
After much hair pulling, I gave up on Linux and went to OpenSolaris, but it wouldn't recognize my SATA controller, so
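For the record, the usual preventative fix on Linux is to import by stable identifiers instead of the sdX names, which survive reordering (pool name is a placeholder):
# zpool export tank
# zpool import -d /dev/disk/by-id tank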
2006 Jun 26
2
raidz2 is alive!
Already making use of it, thank you!
http://www.justinconover.com/blog/?p=17
I took 6x 250GB disks and tried raidz2/raidz/none
# zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
df -h zfs
Filesystem             size   used  avail capacity  Mounted on
zfs                    915G    49K   915G     1%    /zfs
# zpool destroy -f zfs
Plain old raidz (raid-5ish)
# zpool create zfs raidz c0d0
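The 915G figure is about what the arithmetic predicts: raidz2 spends 2 of the 6 disks on parity, leaving 4 x 250GB of usable space, roughly 931GiB in the binary units df reports, minus a little for labels and pool metadata.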
2008 Oct 11
5
questions about replacing a raidz2 vdev disk with a larger one
I'd like to replace/upgrade two 500GB disks in a RaidZ2 vdev with 1TB disks, but I have some preliminary questions/concerns before trying 'zpool replace dpool ?'
Will ZFS permit this replacement?
Will ZFS use the extra space in a heterogeneous RaidZ2 vdev, or is the size limited by the smallest disk in the vdev?
Thanks in advance,
Vizzini
The system is currently running
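Answering from the manpage side: the replacement is permitted as long as the new disk is at least as large as the old one, and a raidz2 vdev's usable size is governed by its smallest member, so the extra space only appears once every disk in the vdev has been upgraded (older releases also need an export/import to grow). A sketch with placeholder device names:
# zpool replace dpool c1t2d0 c1t6d0   # old 500GB disk, new 1TB disk
# zpool status dpool                  # watch the resilver before doing the next one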
2010 Sep 30
0
ZFS Raidz2 problem, detached drive
I have an X4500 thumper box with 48x 500GB drives set up in a pool, split into raidz2 sets of 8-10 drives within the single pool.
I had a failed disk which I cfgadm-unconfigured and replaced, no problem, but it wasn't recognised as a Sun drive in Format, and unbeknownst to me someone else logged in remotely at the time and issued a zpool replace....
I corrected the system/drive
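For reference, the clean X4500 swap sequence looks roughly like this (the attachment point and device name are placeholders):
# cfgadm -c unconfigure sata1/7
  (swap the physical drive)
# cfgadm -c configure sata1/7
# zpool replace tank c1t7d0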
2008 Feb 21
3
raidz2 resilience on 3 disks
Hello,
1) If I create a raidz2 pool on some disks, start to use it, and then the disks'
controllers change, what will happen to my zpool? Will it be lost, or is
there some disk tagging which allows zfs to recognise the disks?
2) If I create a raidz2 on 3 HDs, do I have any resilience? If any one of
those drives fails, do I lose everything? I've got one such pool and
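Both answers are reassuring: ZFS identifies pool members by the labels written on the disks themselves, not by controller path, so a controller change survives an export/import; and a 3-disk raidz2 tolerates the loss of any 2 of its 3 disks, at the cost of keeping only one disk's worth of usable space. A sketch with placeholder names:
# zpool export tank     # before recabling
# zpool import tank     # disks are rediscovered by label, whatever their new paths
# zpool create tiny raidz2 c0t0d0 c0t1d0 c0t2d0   # minimal 3-way raidz2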
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello,
Sorry for the (very) long subject but I've pinpointed the problem to this exact situation.
I know about the other threads related to hangs, but in my case there was no 'zfs destroy' involved, nor any compression or deduplication.
To make a long story short, when
- a disk contains 2 partitions (p1=32GB, p2=1800 GB) and
- p1 is used as part of a zfs mirror of rpool
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 vdevs in the same zpool
I have a customer who has implemented the following layout: as you can
see, he has mostly raidz vdevs but has one raidz2 vdev in the same zpool.
What are the implications here? Is this a bad thing to do? Please
elaborate.
Thanks,
Scott Gaspard
Scott.J.Gaspard at Sun.COM
> NAME        STATE   READ WRITE CKSUM
> chipool1    ONLINE     0     0     0
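Mixing levels works, but the pool's redundancy is only as good as its weakest top-level vdev, and losing any whole vdev loses the pool. zpool warns about exactly this at configuration time; adding a vdev whose replication level differs from the pool's fails with something like the following unless forced (placeholder disks):
# zpool add chipool1 raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is raidz2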
2010 Jul 19
6
Performance advantages of a pool with 2x raidz2 vdevs vs. a single vdev
Hi guys, I am about to reshape my data pool and am wondering what performance difference I can expect from the new config vs. the old.
The old config is a pool with a single vdev of 8 disks in raidz2.
The new pool config is 2 vdevs of 7-disk raidz2 in a single pool.
I understand it should be better, with higher I/O throughput and better read/write rates, but I'm interested to hear the science
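The science, briefly: ZFS stripes across top-level vdevs, and for small random I/O each raidz2 vdev behaves roughly like a single disk, so two vdevs should roughly double random IOPS. Sequential bandwidth should improve as well, since the old layout has 6 data spindles (8 minus 2 parity) against 10 in the new one. A sketch of the new layout with placeholder names:
# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0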
2006 Aug 02
1
raidz -> raidz2
Will it be possible to update an existing raidz to a raidz2? I wouldn't
think so, but maybe I'll be pleasantly surprised.
-frank
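No such luck, at least on current releases: a raidz vdev's parity level and width are fixed at creation. The standard workaround is to build a new raidz2 pool and migrate with send/receive; a sketch with placeholder pool names:
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs recv -Fdu tank2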
2010 Oct 19
8
Balancing LVOL fill?
Hi all
I have this server with some 50TB disk space. It originally had 30TB on WD Greens, was filled quite full, and another storage chassis was added. Now, space problem gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives suck quite hard, but not
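Per-vdev fill is easy to check on builds that support it (pool name is a placeholder):
# zpool list -v tank
ZFS already biases new writes toward the emptier vdevs, but data already written stays where it is; the only real rebalance is rewriting it, e.g. with zfs send/recv or a copy within the pool.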
2009 Jan 21
8
cifs perfomance
Hello!
I'm setting up a ZFS/CIFS home storage server, and I now get poor performance playing movies stored on this ZFS from a Windows client. The server hardware is not new, but under Windows its performance was normal.
CPU is an AMD Athlon Burton Thunderbird 2500 running at 1.7GHz, with 1024MB RAM, and storage:
usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci@0,0/pci1458,5004@2,2/cdrom@1/disk@
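A useful first split in a case like this is whether the slowness is in the disks or in the CIFS path; reading a large file locally on the server takes CIFS and the network out of the picture (the path is a placeholder):
# dd if=/tank/movies/film.avi of=/dev/null bs=1M
If the local read is fast, the investigation moves to the network and the CIFS service.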
2010 Jan 17
1
raidz2 import, some slices, some not
I am in the middle of converting a FreeBSD 8.0-Release system to OpenSolaris
b130
In order to import my stuff, the only way I knew to make it work (from
testing in VirtualBox) was to do this:
label a bunch of drives with an EFI label using the OpenSolaris live CD,
then use those drives in FreeBSD to create a zpool.
This worked fine.
(though I did get a warning in FreeBSD about GPT
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad-core AMD with 16GB of RAM, OpenSolaris upgraded to snv_134.
The zpool
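With 24 disks behind arcmsr, one slow or flaky disk can drag a whole vdev down to its speed; per-device service times make that visible with stock tooling:
# iostat -xn 5
A drive whose asvc_t sits far above its peers under load is the usual culprit.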
2010 Nov 29
9
Seagate ST32000542AS and ZFS perf
Hi,
Does anyone use Seagate ST32000542AS disks with ZFS?
I wonder if the performance is as ugly as it is with WD Green WD20EARS disks.
Thanks,
--
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com
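Much of the WD20EARS pain is usually attributed to its 4KB physical sectors hidden behind 512B emulation; whether a given pool is aligned for that can be checked with zdb (pool name is a placeholder):
# zdb tank | grep ashift
ashift: 9 means 512B-aligned writes (slow on 4K-sector drives); ashift: 12 means 4KB alignment.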
2010 Feb 27
1
slow zfs scrub?
hi all
I have a server running snv_131, and scrub is very slow. A cron job starts it every week; it has now been running for a while and is very, very slow:
scrub: scrub in progress for 40h41m, 12.56% done, 283h14m to go
The configuration is listed below, consisting of three raidz2 groups with seven 2TB drives each. The root fs is on a pair of X25M (gen 1)