Displaying 20 results from an estimated 900 matches similar to: "RAIDZ v. RAIDZ1"
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi,
For zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like raid5, does raid5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
--
This message posted from opensolaris.org
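A rough back-of-envelope comparison, assuming five 7200rpm disks at ~100
random IOPS each (illustrative numbers, not from the thread):

  raidz1 (5 disks): every block is striped across all the disks, so each
                    random read touches the whole vdev -> ~100 read IOPS
  raid5  (5 disks): a small read lands on a single disk -> ~5 x 100 = 500
                    read IOPS, but a small write costs ~4 I/Os (read data,
                    read parity, write data, write parity) -> ~125 write IOPS

So no, raid5 is not the same: its random reads scale with the disk count,
while raidz1 trades that away for full-stripe writes that avoid raid5's
read-modify-write penalty.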
2009 Jan 15
21
4 disk raidz1 with 3 disks...
Hello,
I was hoping that this would work:
http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
can't delete or back up somewhere else)
> root at FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
> /dev/lofi/1
> root at FSK-Backup:~# zpool list
> NAME SIZE USED AVAIL CAP HEALTH ALTROOT
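For reference, the trick in the linked blog post is roughly this (the pool
and first three disk names are the poster's; the sparse-file path and the
final c5t2d0 are made up):

  # mkfile -n 950g /var/tmp/fakedisk   (sparse file, occupies no real space)
  # lofiadm -a /var/tmp/fakedisk       (returns /dev/lofi/1)
  # zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
  # zpool offline ambry /dev/lofi/1    (pool now runs DEGRADED on 3 disks)
  ... copy the 800GB in, free the fourth 1TB disk, then ...
  # zpool replace ambry /dev/lofi/1 c5t2d0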
2009 May 20
2
zfs raidz questions
Hi there,
I'm building a small NAS with 5x1TB disks. The disks currently contain some data, the fs is ntfs, and they aren't in a raid.
Now I'm wondering if it's possible to add the parity later, so that I add one disk to the pool step by step, and when I add the last disk, I enable the parity.
(i have only one another 1 tb disk to backup the files)
Thank you for your replies and
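As far as I know raidz parity can't be bolted on afterwards, and a raidz
vdev can't be grown disk-by-disk; an attach attempt is refused (sketch,
hypothetical names):

  # zpool attach tank c0t1d0 c0t5d0
  cannot attach c0t5d0 to c0t1d0: can only attach to mirrors and top-level disks

The usual workaround is the degraded-create trick from the previous thread:
build the full-width raidz1 with a sparse placeholder device, offline it,
copy the data in, then replace the placeholder with the last real disk.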
2007 Apr 02
4
Convert raidz
Hi
Is it possible to convert a live 3-disk zpool from raidz to raidz2?
And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch?
Thanks
This message posted from opensolaris.org
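As far as I know there is no in-place conversion from raidz to raidz2, and
no way to widen an existing raidz vdev, so the usual route is snapshot, copy
out, and recreate. A minimal sketch, assuming a scratch pool named backup
and hypothetical device names:

  # zfs snapshot tank/data@move
  # zfs send tank/data@move | zfs receive backup/data
  # zpool destroy tank
  # zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0
  # zfs send backup/data@move | zfs receive tank/data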
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
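For what it's worth, zpool status (with or without -v) does print the vdev
tree; the grouping is encoded in the indentation. A made-up example of a
3-disk raidz pool:

  # zpool status tank
    pool: tank
   state: ONLINE
  config:
          NAME        STATE     READ WRITE CKSUM
          tank        ONLINE       0     0     0
            raidz1    ONLINE       0     0     0
              c1t0d0  ONLINE       0     0     0
              c1t1d0  ONLINE       0     0     0
              c1t2d0  ONLINE       0     0     0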
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
Hi all,
I have a 5 drive RAIDZ volume with data that I'd like to recover.
The long story runs roughly:
1) The volume was running fine under FreeBSD on motherboard SATA controllers.
2) Two drives were moved to a HP P411 SAS/SATA controller
3) I *think* the HP controllers wrote some volume information to the end of
each disk (hence no more ZFS labels 2,3)
4) In its "auto
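ZFS keeps four copies of the label on every device, two at the front and two
at the end, which is why a controller that writes metadata to the tail of a
disk clobbers labels 2 and 3. Whatever survives can be inspected with zdb
(device name hypothetical):

  # zdb -l /dev/da2     (dumps each of labels 0-3 it can still read; if only
                         0 and 1 come back, the end of the disk was overwritten)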
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone,
As one of the steps of improving my ZFS home fileserver (snv_134) I wanted
to replace a 1TB disk with a newer one of the same vendor/model/size because
this new one has 64MB cache vs. 16MB in the previous one.
The removed disk will be used for backups, so I thought it's better to
have the 64MB-cache disk in the on-line pool than in the backup set sitting
off-line all
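The swap itself is a one-liner, and resilver progress shows up in zpool
status (device names hypothetical):

  # zpool replace tank c2t3d0 c2t5d0   (old 16MB-cache disk -> new 64MB one)
  # zpool status tank                  (reports "resilver in progress", % done)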
2007 May 05
13
Optimal strategy (add or replace disks) to build a cheap and raidz?
Hello,
I have an 8 port sata-controller and I don't want to spend the money for 8 x 750 GB SATA disks
right now. I'm thinking about an optimal way of building a growing raidz-pool without losing
any data.
As far as i know there are two ways to achieve this:
- Adding 750 GB Disks from time to time. But this would lead to multiple groups with multiple
redundancy/parity disks. I
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5tb raidz1.
I want to add "phase 2" which is another 7x1.5tb raidz1
Can I add the second phase to the first phase and basically have two
raid5s striped (in raid terms)?
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
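Assuming the new disks show up as c3t0d0 through c3t6d0 (names made up), the
commands would be roughly:

  # zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
  # zpool upgrade tank        (moves the pool to the newest on-disk version)

Note that existing data stays on the first vdev; only new writes are striped
across both.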
2011 Jan 29
19
multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20min and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
The new drive cage started to fail; it hung the server and the box
rebooted. After it rebooted, the entire pool was gone, in the state
below. I had only written a few
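Depending on what the ZFS bits in 8.2 support, the usual first steps are
(pool name hypothetical):

  # zpool import              (scans attached disks and lists visible pools)
  # zpool import -f tank      (force the import if it claims to be in use)
  # zpool import -F tank      (if supported: rewind to the last good txg)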
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 vdevs in the same zpool
I have a customer who has implemented the following layout: As you can
see, he has mostly raidz vdevs but has one raidz2 in the same zpool.
What are the implications here? Is this a bad thing to do? Please
elaborate.
Thanks,
Scott Gaspard
Scott.J.Gaspard at Sun.COM
> NAME       STATE   READ WRITE CKSUM
> chipool1   ONLINE     0     0     0
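zpool normally refuses to mix parity levels, so this layout was presumably
forced; a sketch of what that refusal looks like (hypothetical devices, and
the exact wording may differ by release):

  # zpool add chipool1 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0
  invalid vdev specification
  use '-f' to override the following errors:
  mismatched replication level: pool uses 1 device parity and new vdev uses 2

The practical implication is uneven redundancy: blocks that land on the
raidz vdevs survive one disk failure per vdev, and only blocks on the raidz2
vdev survive two.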
2010 Mar 07
0
strangeness after resilvering disk from raidz1 on disks with no EFI GPTs
I have a zpool of five 1.5TB disks in raidz1. They are on c?t?d?p0 devices -
using the full disk, not any slice or partition, because the pool was
created in zfs-fuse in linux and no partition tables were ever created. (for
the full saga of my move from that to opensolaris, anyone who missed out on
the fun can read the thread
http://www.mail-archive.com/zfs-discuss at opensolaris.org/msg34813.html
2008 Apr 16
0
ZFS raidz1 replacing failing disk
I'm having a serious problem with a customer running a T2000 with ZFS configured as raidz1 with 4 disks, no spare.
The machine is mostly a cyrus imap server and web application server to run the ajax app to email.
Yesterday we had a heavy slow down.
Tomcat runs smoothly, but the imap access is very slow, also through a direct imap client running on LAN PCs.
We figured out that the 4th
2011 Nov 09
3
Data distribution not even between vdevs
Hi list,
My zfs write performance is poor and I need your help.
I created a zpool with 2 raidz1 vdevs. When the space was about to be used
up, I added another 2 raidz1 vdevs to extend the zpool.
After some days the zpool was almost full, so I removed some old data.
But now, as shown below, usage of the first 2 raidz1 vdevs is about 78% and
usage of the last 2 raidz1 vdevs is about 93%.
I have this line in /etc/system:
set
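Per-vdev fill levels can be read straight out of zpool iostat -v. ZFS biases
new allocations toward emptier vdevs, but it never rebalances existing data,
so the skew only drains away as old blocks are freed and rewritten. A sketch
with made-up numbers:

  # zpool iostat -v tank
                 capacity     operations    bandwidth
  pool         alloc   free   read  write   read  write
  tank         12.3T  2.10T    ...
    raidz1     2.79T   810G    ...   <- older vdevs, ~78% full
    raidz1     2.80T   800G    ...
    raidz1     3.35T   250G    ...   <- newer vdevs, ~93% full
    raidz1     3.36T   240G    ...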
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7 drive raidz (next time I'll do raidz2).
The server crashed complaining about a drive failure, so I rebooted into single user mode, not realizing that two drives had failed. I put in a new 500g replacement and had zfs start a replace operation, which failed at about 2% because there were two broken
2010 Nov 18
5
RAID-Z/mirror hybrid allocator
Hi, I'm referring to:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913
It should be in Solaris 11 Express; has anyone tried this? How is this supposed to work? Any documentation available?
Yours
Markus Kovero
2011 Feb 05
12
ZFS Newbie question
I've spent a few hours reading through the forums and wiki and honestly my head is spinning. I have been trying to study up on either buying or building a box that would allow me to add drives of varying sizes/speeds/brands (adding more later etc) and still be able to use the full space of drives (minus parity? [not sure if I got the terminology right]) with redundancy. I have found the "all in
2010 Jan 28
13
ZFS configuration suggestion with 24 drives
Replacing my current media server with another larger capacity media server. Also switching over to solaris/zfs.
Anyhow we have 24 drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. I'm inquiring as to the best vdev configuration for this. I'm considering the following configurations
4 x x6
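Some quick arithmetic for comparison, assuming 1TB drives and ignoring
formatting overhead (my assumptions, not the poster's):

  4 vdevs x 6-disk raidz2:  4 x (6-2) = 16TB usable, 2 failures per vdev
  3 vdevs x 8-disk raidz2:  3 x (8-2) = 18TB usable, 2 failures per vdev
  2 vdevs x 12-disk raidz2: 2 x (12-2) = 20TB usable, but wide stripes
                            resilver slowly and random IOPS fall to roughly
                            2 disks' worth (about 1 disk per vdev)

For a handful of users doing large sequential reads, the wider vdevs are
more defensible than they would be for random workloads.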
2007 Dec 31
4
Help! ZFS pool is UNAVAILABLE
Hi All,
I posted this in a different thread, but it was recommended that I post in this one.
Basically, I have a 3 drive raidz array on internal Seagate drives, running build 64nv. I purchased 3 additional USB drives with the intention of mirroring and then migrating the data to the new USB drives.
I accidentally added the 3 USB drives in a raidz to my original storage pool, so now I have 2
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz, 32GB ram, 96 spindles arranged into raidz vdevs on OI 147.