Displaying 20 results from an estimated 8000 matches similar to: "vanished ZFS pool"
2007 Dec 31
4
Help! ZFS pool is UNAVAILABLE
Hi All,
I posted this in a different thread, but it was recommended that I post in this one.
Basically, I have a 3-drive raidz array on internal Seagate drives, running Nevada build 64. I purchased 3 additional USB drives with the intention of mirroring and then migrating the data to the new USB drives.
I accidentally added the 3 USB drives as a raidz to my original storage pool, so now I have 2
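The distinction that bit this poster is between attaching a disk (which creates or extends a mirror) and adding a vdev (which permanently grows the pool). A minimal sketch of the two commands, with hypothetical pool and device names rather than the poster's actual layout:
    # zpool attach tank c1t0d0 c2t0d0
      (mirrors the new disk onto an existing disk; the mirror side can later be detached)
    # zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
      (creates a new raidz top-level vdev; at the time, top-level vdevs could not be removed again)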
2010 Jul 06
3
Help with Faulted Zpool Call for Help(Cross post)
Hello list,
I posted this a few days ago on the opensolaris-discuss@ list.
I am posting here because there may be too much noise on the other lists.
I have been without this ZFS set for a week now.
My main concern at this point: is it even possible to recover this zpool?
How does the metadata work? What tool could I use to rebuild the
corrupted parts, or even find out which parts are corrupted?
most but
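For the "what tool could I use" question, zdb is the usual starting point for inspecting on-disk metadata without importing the pool; it is diagnostic only and repairs nothing. A rough sketch, with a hypothetical device path and pool name:
    # zdb -l /dev/rdsk/c1t0d0s0
      (dumps the vdev labels on that device, including the pool config they record)
    # zdb -e tank
      (examines a pool that is not currently imported, reading its metadata directly)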
2006 Jun 12
3
ZFS + Raid-Z pool size incorrect?
I'm seeing odd behaviour when I create a ZFS raidz pool using three disks. The output of "zpool status" shows the pool size as the size of the three disks combined (as if it were a RAID 0 volume). This isn't expected behaviour, is it? When I create a mirrored volume in ZFS, everything is as one would expect: the pool is the size of a single drive.
My setup:
Compaq
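This usually turns out to be a reporting difference rather than a bug: for raidz, zpool list reports raw capacity including parity, while zfs list reports usable space. A quick way to see both, using a hypothetical three-disk pool:
    # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    # zpool list tank     (SIZE counts all three disks, parity included)
    # zfs list tank       (AVAIL is roughly two disks' worth, after parity)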
2006 Jul 03
8
[raidz] file not removed: No space left on device
On a system still running nv_30, I've a small RaidZ filled to the brim:
2 3 root@mir pts/9 ~ 78# uname -a
SunOS mir 5.11 snv_30 sun4u sparc SUNW,UltraAX-MP
0 3 root@mir pts/9 ~ 50# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
mirpool1          33.6G      0   137K  /mirpool1
mirpool1/home     12.3G      0  12.3G  /export/home
mirpool1/install  12.9G
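Because ZFS is copy-on-write, even an unlink needs a little free space to write new metadata, which is why rm can fail on a completely full pool. One commonly suggested workaround is to truncate a large file first (freeing its data blocks) and then remove it; the file name below is hypothetical:
    # cat /dev/null > /export/home/somebigfile
    # rm /export/home/somebigfile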
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment and now my raidz pool is corrupted. I have a
raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7
and tried to add my pool to FreeNAS. After adding the zfs disk,
vdev and pool, I decided to back out and went back to OpenSolaris. Now
my raidz pool will not mount, and I get the following errors. I hope someone
with expertise can help me recover from this error.
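When moving a pool between operating systems, the usual sequence is to export it cleanly on one side and import it on the other; skipping the export, or letting the second OS rewrite labels, is a common way to end up in this situation. A minimal sketch with a hypothetical pool name:
    # zpool export tank        (on the system the pool is leaving)
    # zpool import tank        (on the destination)
    # zpool import -f tank     (only if the pool was not exported and still looks "in use")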
2010 Mar 27
4
Mixed ZFS vdev in same pool.
I have a question about using mixed vdevs in the same zpool and what the community opinion is on the matter. Here is my setup:
I have four 1TB drives and two 500GB drives. When I first set up ZFS I was under the assumption that it does not really care much about how you add devices to the pool, and that it assumes you are thinking things through. But when I tried to create a pool (called group) with four
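For reference, zpool create deliberately refuses mixed redundancy levels unless forced, which is likely the error hit here. A sketch with hypothetical device names:
    # zpool create group raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 mirror c2t0d0 c2t1d0
      (fails with a mismatched-replication-level warning)
    # zpool create -f group raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 mirror c2t0d0 c2t1d0
      (-f overrides the warning and builds the mixed pool anyway)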
2011 Jan 29
19
multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20 minutes, and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
The new drive cage started to fail; it hung the server and the box
rebooted. After it rebooted, the entire pool was gone and is in the state
below. I had only written a few
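If the on-disk labels are intact but the most recent transaction group is damaged, newer ZFS code can try importing the pool by rewinding to an earlier txg. Whether FreeBSD 8.2's ZFS is new enough to offer this depends on the pool and code version, so treat this only as a sketch with a hypothetical pool name:
    # zpool import -nF tank    (dry run: reports whether a rewind import would succeed, without changing anything)
    # zpool import -F tank     (performs the rewind import, discarding the last few seconds of writes)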
2013 Jan 08
3
pool metadata has duplicate children
I seem to have managed to end up with a pool that is confused about its child disks. The pool is faulted with corrupt metadata:
  pool: d
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
        a backup source.
   see: http://illumos.org/msg/ZFS-8000-72
  scan: none requested
config:
        NAME        STATE
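With metadata this damaged, a read-only import is often the least risky next step, since it avoids writing anything while data is copied off. Pool name taken from the status output above; the options require a reasonably recent illumos ZFS:
    # zpool import -o readonly=on d
    # zpool import -F d          (separately, attempts a rewind to an older, consistent txg)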
2006 May 19
11
tracking error to file
In my testing, I've found the following error:
zpool status -v
  pool: local
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
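The piece this poster was after sits at the very bottom of the -v output: zpool status -v appends a list of the affected file paths under the "errors:" section, and the counters can be cleared once the files are dealt with. A sketch using the pool name above:
    # zpool scrub local        (re-checks every block so the error list is complete)
    # zpool status -v local    (the trailing errors: section names the damaged files)
    # zpool clear local        (resets the error counters once the files are restored or deleted)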
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What
would you do next to try and recover this zfs pool?
I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was
composed of 4 1.5 TiB disks. One disk is totally dead. Another had
SMART errors, but using GNU ddrescue I was able to copy all the data
off successfully.
I have copied all 3 remaining disks as images using
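One way to attempt an import from image files on Solaris/illumos is to map each image to a block device with lofiadm and point zpool import at the lofi directory; the paths below are hypothetical and this is only a sketch of the approach:
    # lofiadm -a /recovery/disk2.img            (repeat for each surviving image; creates /dev/lofi/1, /dev/lofi/2, ...)
    # zpool import -d /dev/lofi                 (scans only that directory for pool members)
    # zpool import -d /dev/lofi -o readonly=on bank0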
2013 Mar 20
11
System started crashing hard after zpool reconfigure and OI upgrade
I have two identical Supermicro boxes with 32GB ram. Hardware details at
the end of the message.
They were running OI 151.a.5 for months. The zpool configuration was one
storage zpool with 3 vdevs of 8 disks in RAIDZ2.
The OI installation is absolutely clean. Just next-next-next until done.
All I do is configure the network after install. I don't install or enable
any other services.
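For concreteness, a pool like the one described (3 top-level raidz2 vdevs of 8 disks each) would have been created along these lines, with hypothetical device names:
    # zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
        raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0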
2007 Aug 09
5
Unremovable file in ZFS filesystem.
I managed to create a link in a ZFS directory that I can't remove. Session as follows:
# ls
bayes.lock.router.3981 bayes_journal user_prefs
# ls -li bayes.lock.router.3981
bayes.lock.router.3981: No such file or directory
# ls
bayes.lock.router.3981 bayes_journal user_prefs
# /usr/sbin/unlink bayes.lock.router.3981
unlink: No such file or directory
# find . -print
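When a name shows up in ls but cannot be stat'ed, the directory entry often contains trailing or non-printing characters, so the obvious spelling of the name never matches. Two things worth trying, purely as a sketch (the inode number is hypothetical and would come from a working ls -li or find listing):
    # ls -b                              (prints non-printing characters in names as octal escapes)
    # find . -inum 12345 -exec rm {} \;  (removes the entry by inode number instead of by name)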
2008 Jun 07
4
Mixing RAID levels in a pool
Hi,
I had a plan to set up a zfs pool with different raid levels, but I ran
into an issue based on some testing I've done in a VM. I have 3x 750
GB hard drives and 2x 320 GB hard drives available, and I want to set
up a RAIDZ for the 750 GB drives and a mirror for the 320 GB drives and add it all to
the same pool.
I tested detaching a drive and it seems to seriously mess up the
entire pool and I
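Two notes that may explain the messy test result: creating the raidz+mirror layout needs -f to get past the mismatched-replication warning, and detach is a mirror-only operation that really removes the disk from the vdev; to simulate a failure, offline/online is the safer test. A sketch with hypothetical device names:
    # zpool create -f tank raidz c1t0d0 c1t1d0 c1t2d0 mirror c2t0d0 c2t1d0
    # zpool offline tank c2t0d0    (disk is marked offline but stays part of the vdev)
    # zpool online tank c2t0d0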
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2).
The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500 GB replacement and had ZFS start a replace operation, which failed at about 2% because there were two broken
2006 Mar 03
1
zfs? page faults
Hi,
I have set up an old Compaq ProLiant DL580 with 2 Xeons @ 700 MHz, 2 GB RAM,
two SmartArray 5300 controllers and 12 drives in an array enclosure.
I am running the latest opensolaris update, bfu'ed from binaries since I
could not build from source. I am controlling the drives with the
cpqary3 driver (Solaris 10) from HP.
Initially the array had 7 drives and I created a raidz zfs pool
2012 Jan 08
0
Pool faulted in a bad way
Hello,
I have been asked to take a look at a pool on an old OSOL 2009.06 host. It had been left unattended for a long time and was found in a FAULTED state. Two of the disks in the raidz2 pool seem to have failed; one has been replaced by a spare, the other one is UNAVAIL. The machine was restarted and the damaged disks were removed to make it possible to access the pool without it hanging
2008 Jun 05
6
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc at c4solutions.net>
NOTE: This is last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me b/c I didn't try and replace the log on a running
system. My
2009 Oct 22
1
raidz "ZFS Best Practices" wiki inconsistency
<http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations>
says that the number of disks in a RAIDZ should be (N+P) with
N = {2,4,8} and P = {1,2}.
But if you go down the page just a little further to the thumper
configuration examples, none of the 3 examples follow this recommendation!
I will have 10 disks to put into a
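Taking the (N+P) guideline at face value, 10 disks can satisfy it either as one 8+2 raidz2 vdev or as two 4+1 raidz1 vdevs, even though the Thumper examples on the same page do not follow the rule. A sketch of both layouts with hypothetical device names:
    # zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0
    # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0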
2010 Nov 08
8
Any limit on pool hierarchy?
Folks,
From the zfs documentation, it appears that a "vdev" can be built from other vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs, and a mirror can be built across a few raidz vdevs.
Is my understanding correct? Also, is there a limit on the depth of a vdev?
Thank you in advance for your help.
Regards,
Peter
--
This message posted from opensolaris.org
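As far as I know, vdevs do not nest: zpool accepts only one level of grouping (disks or files gathered into a mirror or raidz), and the pool then stripes across those top-level vdevs, so a "raidz of mirrors" cannot be expressed. What is allowed looks like this, with hypothetical names:
    # zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
      (two top-level mirror vdevs; the pool stripes writes across them)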
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the
following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs the
116 GB I had on my other RAID-Z1 pool of same-sized disks).
I thought one of the disks might have been to blame, so I tried swapping it
out
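Since raidz1 capacity is limited by the smallest member (usable space is roughly (N-1) x the smallest disk), checking what size the system actually reports for each disk is a reasonable first step; the device name below is hypothetical:
    # iostat -En                        (the Size: field shows the capacity seen for every disk)
    # prtvtoc /dev/rdsk/c5t3d0s2        (shows the partition/slice layout of one suspect disk)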