Displaying 20 results from an estimated 2000 matches similar to: "Help! ZFS pool is UNAVAILABLE"
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the
following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs the
116 GB I had on my other RAID-Z1 pool of same-sized disks).
I thought one of the disks might have been to blame, so I tried swapping it
out
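A quick way to sanity-check what ZFS actually sees, a minimal sketch using the pool name from the post:
# zpool status -v magicant
(lists every device in the raidz vdev, so a mistyped or missing disk shows up immediately)
# zpool list magicant
(reports the raw pool size; comparing it against 7 x the size format reports for one disk shows whether all seven disks are being counted)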
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi!
I have a problem with ZFS and most likely the SATA PCI-X controllers. I run
OpenSolaris 2008.11 snv_98 and my hardware is a Sun Netra X4200 M2 with
3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis
which each hold 4 SATA disks manufactured by Seagate, model ES.2
(500 and 750), for a total of 12 disks. Every disk has its own eSATA cable
connected to the ports on the PCI-X
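When one controller or cable is the bottleneck, per-disk service times usually give it away. A rough sketch of what to watch (interval and count are arbitrary; the pool name is not given in the post, so <pool> is a placeholder):
# iostat -xn 30 5
(compare asvc_t and %b across the 12 disks; one consistently slow disk, or one slow group of 4, points at a drive or at one SIL3124 card)
# zpool iostat -v <pool> 30
(shows whether the load is spread evenly across the vdevs)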
2006 Jul 18
1
file access algorithm within pools
Hello,
What is the access algorithm used within multi-component pools for a
given pool, and does it change when one or more members of the pool
become degraded?
examples:
zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror c5t0d0 c6t0d0
or;
zpool create ztank raidz c1t0d0 c2t0d0 c3t0d0 raidz c4t0d0 c5t0d0 c6t0d0
As files are created on the filesystem within these pools,
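One way to observe the behaviour rather than guess at it, a small sketch using the mtank example above:
# zpool iostat -v mtank 5
(the per-vdev rows show how ZFS dynamically stripes new writes across the three mirrors; with a degraded mirror the remaining member keeps serving reads, so the distribution shifts rather than stopping)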
2010 Jan 10
2
possible to remove a mirror pair from a zpool?
Suppose the requirements for storage shrink (it can happen); is it
possible to remove a mirror set from a zpool?
Given this :
# zpool status array03
pool: array03
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is
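For what it's worth, a sketch of what the operation looks like where it is supported at all (the vdev name mirror-1 is hypothetical, taken from what zpool status would print):
# zpool remove array03 mirror-1
(top-level vdev removal only exists on pools and releases with the device-removal feature; on the older on-disk format shown above, the realistic options were to migrate the data to a new, smaller pool and destroy the old one)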
2009 Jan 15
21
4 disk raidz1 with 3 disks...
Hello,
I was hoping that this would work:
http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
I have 4 x 1TB disks, one of which is filled with 800GB of data (that I
can't delete or back up somewhere else)
> root at FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
> root at FSK-Backup:~# zpool list
> NAME SIZE USED AVAIL CAP HEALTH ALTROOT
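The trick in the linked blog post is to stand in a sparse file for the missing fourth disk and run the raidz1 degraded until the real disk is freed up. A rough sketch of that approach (the file size and the final replacement disk c5t2d0 are hypothetical):
# mkfile -n 1000g /var/tmp/fakedisk      (sparse, so no space is actually consumed)
# lofiadm -a /var/tmp/fakedisk           (appears as /dev/lofi/1)
# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
# zpool offline ambry /dev/lofi/1        (pool now runs degraded, with no redundancy)
# zpool replace ambry /dev/lofi/1 c5t2d0 (later, once the fourth real disk has been emptied)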
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start:
Status immediately after starting resilver:
# zpool status
pool: rc-pool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao,
the root filesystem of my Thumper is a ZFS pool with a single disk:
bash-3.2# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME        STATE   READ WRITE CKSUM
rpool       ONLINE     0     0     0
  c5t0d0s0  ONLINE     0     0     0
spares
  c0t7d0    AVAIL
  c1t6d0    AVAIL
  c1t7d0
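Attaching a second device to an existing single-disk vdev is exactly what zpool attach does; a minimal sketch, where c4t0d0s0 stands in for whatever same-sized slice is actually free:
# zpool attach rpool c5t0d0s0 c4t0d0s0
(the vdev becomes a two-way mirror and resilvers automatically; for a root pool the new disk also needs boot blocks installed, e.g. with installgrub on x86, before it can be booted from)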
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
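The layout can be read directly off the status output; a short sketch with a placeholder pool name:
# zpool status -v tank
(each raidz or mirror line is one top-level vdev, and the devices indented beneath it are its members)
# zpool list tank
(adds the total size; newer releases also accept zpool list -v to break the size out per vdev)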
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows some weird config of an old zpool
bash-3.00# zpool import
pool: data1
id: 7539031628606861598
state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-3C
config:
data1 UNAVAIL insufficient replicas
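The stale configuration lives in the ZFS labels still sitting on the old disks, so clearing those labels makes the phantom pool disappear from zpool import. A hedged sketch, with c2t0d0s0 as a hypothetical device from the old data1 pool:
# zdb -l /dev/rdsk/c2t0d0s0
(prints the four on-disk labels, confirming they belong to the defunct pool before anything is destroyed)
# zpool labelclear -f /dev/rdsk/c2t0d0s0
(wipes the labels; on releases that predate labelclear, overwriting the start and end of the device with dd was the usual workaround)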
2008 Apr 11
0
How to replace root drive if ZFS data is on it?
Hi, Experts:
A customer has an X4500 with the boot drives (c5t0d0s0 and c5t4d0s0)
mirrored by SVM. ZFS uses two other partitions on these same two drives
(c5t0d0s3 and c5t4d0s3).
If we need to replace the disk drive c5t0d0, do we need to do anything
on the ZFS side (c5t0d0s3 and c5t4d0s3) first, or can we just follow the
regular boot drive replacement procedure?
Below is the summary of their current ZFS
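A rough sketch of the usual order of operations, with the pool name as a placeholder since it is not shown in the post (the SVM half of the procedure is unchanged):
# zpool offline <pool> c5t0d0s3              (tell ZFS the slice is going away)
  ... replace the physical drive, then copy the partition table from the survivor:
# prtvtoc /dev/rdsk/c5t4d0s2 | fmthard -s - /dev/rdsk/c5t0d0s2
  ... re-attach the SVM boot mirrors as usual, then resilver the ZFS slice:
# zpool replace <pool> c5t0d0s3
(with a single device argument, zpool replace rebuilds onto the new disk in the same slot; nothing else needs to be done on the ZFS side beforehand)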
2008 Sep 16
1
Interesting Pool Import Failure
Hello... Since there has been much discussion about zpool import failures
resulting in loss of an entire pool, I thought I would illustrate a scenario
I just went through to recover a faulted pool that wouldn't import under
Solaris 10 U5. While this is a simple scenario, and the data was not
terribly important, I think the exercise should at least give some peace of
mind to those who
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 vdevs in the same zpool
I have a customer who has implemented the following layout: as you can
see, he has mostly raidz vdevs but has one raidz2 vdev in the same zpool.
What are the implications here? Is this a bad thing to do? Please
elaborate.
Thanks,
Scott Gaspard
Scott.J.Gaspard at Sun.COM
> NAME       STATE   READ WRITE CKSUM
> chipool1   ONLINE     0     0     0
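For reference, a sketch of how such a mixed layout comes about in the first place (disk names hypothetical):
# zpool create chipool1 raidz  c0t0d0 c0t1d0 c0t2d0 c0t3d0
# zpool add -f chipool1 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
(-f is needed because zpool add normally refuses a vdev whose replication level differs from the rest of the pool; it works, but space efficiency differs per vdev and the pool as a whole is only as robust as its weakest top-level vdev)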
2006 Jul 17
11
ZFS bechmarks w/8 disk raid - Quirky results, any thoughts?
Hi All,
I've just built an 8 disk ZFS storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes, I'm that excited about it!), so naturally I'm looking
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
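The commands that eventually shipped for hot spares look roughly like the following sketch (pool and disk names hypothetical):
# zpool add tank spare c3t0d0          (associate a hot spare with the pool)
# zpool replace tank c1t0d0 c3t0d0     (swap the spare in for a failing disk by hand)
# zpool detach tank c1t0d0             (once the bad disk is gone, the spare becomes a permanent member)
(zpool status lists spares in their own section, marked AVAIL or INUSE)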
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
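One way to pin the difference down is to build the two pools one after the other on the same drives and compare what each command reports; a sketch with hypothetical disk names:
# zpool create tank raidz  c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool list tank ; zfs list tank
# zpool destroy tank
# zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool list tank ; zfs list tank
(zpool list reports the raw size including parity, while zfs list reports usable space after parity; comparing a zpool number for one pool against a zfs number for the other will always look like a discrepancy, even though raidz and raidz1 are the same layout)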
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
Hi all,
I have a 5 drive RAIDZ volume with data that I'd like to recover.
The long story runs roughly:
1) The volume was running fine under FreeBSD on motherboard SATA controllers.
2) Two drives were moved to a HP P411 SAS/SATA controller
3) I *think* the HP controllers wrote some volume information to the end of
each disk (hence no more ZFS labels 2,3)
4) In its "auto
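A useful first step is to see exactly which labels survive on each drive; a minimal sketch with a hypothetical device name:
# zdb -l /dev/rdsk/c3t0d0s0
(ZFS keeps four copies of the label per device, two at the front and two at the end; if the HP controller only overwrote the end of each disk, labels 0 and 1 should still be intact, which is usually enough for zpool import to recognise the pool)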
2008 Oct 15
29
HELP! SNV_97,98,99 zfs with iscsitadm and VMWare!
I'm not sure if this is a problem with the iscsitarget or zfs. I'd greatly appreciate it if it gets moved to the proper list.
Well, I'm just about out of ideas on what might be wrong.
Quick history:
I installed OS 2008.05 when it was SNV_86 to try out ZFS with VMWare. Found out that multiple LUNs were being treated as multipaths, so I waited till SNV_94 came out to
2010 Jul 20
16
zfs raidz1 and traditional raid 5 perfomrance comparision
Hi,
for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like raid5, does raid5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS?
Regards
Victor
--
This message posted from opensolaris.org
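A rough back-of-the-envelope comparison, assuming ~150 random IOPS per spindle and 6 disks:
  raidz1 (6 disks): ~150 random read IOPS, because each block is spread across
                    the data disks and every small read touches the whole stripe
  raid5  (6 disks): up to ~6 x 150 = 900 small random read IOPS, since each small
                    read is served by a single disk, but small random writes pay a
                    read-modify-write penalty of roughly 4 I/Os per write
So the two are not equivalent: raidz1 trades small-read concurrency for always-consistent full-stripe writes.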
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
2006 Jun 26
2
raidz2 is alive!
Already making use of it, thank you!
http://www.justinconover.com/blog/?p=17
I took 6 x 250GB disks and tried raidz2/raidz/none
# zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
df -h zfs
Filesystem size used avail capacity Mounted on
zfs 915G 49K 915G 1% /zfs
# zpool destroy -f zfs
Plain old raidz (raid-5ish)
# zpool create zfs raidz c0d0