Displaying 20 results from an estimated 2000 matches similar to: "Assistance needed expanding RAIDZ with larger drives"
2008 Sep 05
0
raidz pool metadata corrupted nexenta-core->freenas 0.7->nexenta-core
I made a bad judgment call and now my raidz pool is corrupted. I have a
raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7
and tried to add my pool to FreeNAS.
After adding the ZFS disk,
vdev and pool, I decided to back out and went back to OpenSolaris. Now
my raidz pool will not mount, and I get the following errors. I hope someone
with more expertise can help me recover from this.
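For anyone hitting the same situation: the clean way to move a pool between operating systems is to export it on one side and import it on the other. A minimal sketch, using a hypothetical pool name "tank" (the poster's pool name is not shown):

    zpool export tank        # on the old host: flushes state and marks the pool exported
    zpool import             # on the new host: scans device labels and lists importable pools
    zpool import tank        # -f is only needed if the pool was never cleanly exported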
2012 Jan 08
0
Pool faulted in a bad way
Hello,
I have been asked to take a look at a pool on an old OSOL 2009.06 host. It had been left unattended for a long time and was found in a FAULTED state. Two of the disks in the raidz2 pool seem to have failed; one has been replaced by a spare, the other is UNAVAIL. The machine was restarted and the damaged disks were removed to make it possible to access the pool without it hanging
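For context, a sketch of the usual triage commands for a raidz2 pool with a failed disk and an active spare; the pool and device names here (tank, c1t3d0, c1t9d0) are hypothetical:

    zpool status -x                   # show only pools with problems, and which vdevs are UNAVAIL
    zpool replace tank c1t3d0 c1t9d0  # resilver a new disk in place of the failed one
    zpool detach tank c1t3d0          # after the resilver completes, drop the old disk (or the spare)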
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a very large performance drop on our ZFS storage server.
We have 2 pools: pool 1, stor, is a raidz across 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level), we upgraded our NAS head from OpenSolaris b57
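A sketch of how the on-disk versions can be compared, assuming the pool named stor from the post and a hypothetical member device:

    zpool upgrade              # with no arguments, lists pools below the version the software supports
    zpool get version stor     # the pool version the running system reports
    zdb -l /dev/dsk/c2t0d0s0   # dumps a vdev label, including the version recorded on disk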
2010 May 07
0
confused about zpool import -f and export
Hi, all,
I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen, and I have to run the installer in HVM mode, then I'm trying to get it back up in PV mode. In that process the controller names change, and that's where I'm getting tripped up.
I do a successful install, then I boot OK,
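The conceptual point is that zpool import identifies pools by the labels on the devices, not by the old controller names, so the rename itself should be harmless. A sketch with a hypothetical pool name "tank":

    zpool export tank            # under hvm, before the switch, if the pool is still importable there
    zpool import -d /dev/dsk     # under pv: rescan the (renamed) devices for pool labels
    zpool import tank            # -f is only needed when the pool was not exported first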
2011 Jan 04
0
zpool import hangs system
Hello,
I've been using NexentaStor Community Edition with no issues for a
while now; however, last week I was going to rebuild a different system, so I
started to copy all the data off it to a raidz2 volume on my CE
system. This was going fine until I noticed that the copy had stalled and
the entire system was non-responsive. I let it sit for several hours
with no
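If the installed build supports read-only import (it appeared in relatively late builds, so this is an assumption about the poster's version), it can be a low-risk way to see whether the hang is tied to replaying writes; "tank" is a hypothetical pool name:

    zpool import -o readonly=on tank   # import without allowing any writes to the pool
    zpool status -v tank               # if it comes in, check for devices stuck in a degraded state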
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no
longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices
for another pool. I don't think this is related, since the pools are offline
pending access to the volumes.
I tried running find /dev/zvol/dsk/poolname -type f and here is the stack;
hopefully it gives someone a hint at what the issue is. I have
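For reference, a sketch of how zvol-backed cache devices can be listed and dropped from the dependent pool; "otherpool" and "cachevol" are hypothetical names:

    zfs list -t volume -r poolname                          # list the zvols that back the l2arc/slog
    zpool remove otherpool /dev/zvol/dsk/poolname/cachevol  # cache devices can be removed; removing a slog needs pool version 19+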
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What
would you do next to try and recover this zfs pool?
I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was
composed of 4 1.5 TiB disks. One disk is totally dead. Another had
SMART errors, but using GNU ddrescue I was able to copy all the data
off successfully.
I have copied all 3 remaining disks as images using
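One common next step on Solaris-derived systems is to expose the rescued images as block devices and let zpool scan them. A sketch with hypothetical image paths; the -F recovery option assumes a build new enough to have it:

    lofiadm -a /recovery/disk1.img          # creates a block device such as /dev/lofi/1
    lofiadm -a /recovery/disk2.img
    lofiadm -a /recovery/disk3.img
    zpool import -d /dev/lofi               # scan the lofi devices for bank0's labels
    zpool import -d /dev/lofi -f -F bank0   # then attempt a forced import with transaction rollback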
2007 Oct 14
1
odd behavior from zpool replace.
I've got a little zpool with a naughty raidz vdev that won't take a
replacement that, as far as I can tell, should be adequate.
Some history: this could well be some bizarro edge case, as the pool doesn't
have the cleanest lineage. Initial creation happened on NexentaCP inside
VMware on Linux. I had given the virtual machine raw device access to 4
500 GB drives and 1 ~200 GB
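For context, a replacement device must be at least as large as the usable size of the one it replaces, and nearly-identical drives sometimes fall just short of that after labels and reserved space. A sketch with hypothetical pool and device names:

    zpool replace tank c2t1d0 c3t1d0   # typically fails with a "device is too small" message in this case
    prtvtoc /dev/rdsk/c2t1d0s0         # compare the sector counts of the old and new disks
    prtvtoc /dev/rdsk/c3t1d0s0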
2009 Apr 08
2
ZFS data loss
Hi,
I have lost a ZFS volume and I am hoping to get some help recovering the
information (a couple of months' worth of work :( ).
I have been using ZFS for more than 6 months on this project. Yesterday
I ran a "zpool status" command, the system froze and rebooted. When it
came back the disks were not available.
See below the output of "zpool status" and "format"
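A sketch of the first checks worth making when the disks disappear after a crash (no pool-specific names assumed):

    format          # confirm the OS still sees the underlying disks at all
    zpool status    # pools the host believes are imported, and their device states
    zpool import    # pools found by scanning attached devices but not currently imported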
2006 May 09
3
Possible corruption after disk hiccups...
I'm not sure exactly what happened with my box here, but something caused a hiccup on multiple SATA disks...
May 9 16:40:33 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci10de,5c@9/pci-ide@a/ide@0 (ata6):
May 9 16:47:43 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@7/ide@1 (ata3):
May 9 16:47:43 sol timeout: abort request, target=0
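Once the devices respond again, a sketch of the usual way to find out whether the timeouts actually damaged anything on disk; "tank" is a hypothetical pool name:

    zpool status -v tank   # CKSUM counters and any files with permanent errors
    zpool scrub tank       # re-read and verify every block, repairing from redundancy where possible
    zpool clear tank       # reset the error counters once the scrub comes back clean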
2009 Aug 12
4
zpool import -f rpool hangs
I had an rpool with two SATA disks in a mirror, on Solaris 10 5.10
Generic_141415-08 i86pc i386 i86pc.
Unfortunately the first disk, which holds the GRUB loader, has failed with unrecoverable
block read/write errors.
Now I have the problem of importing rpool after the first disk has failed.
So I decided to run "zpool import -f rpool" with only the second disk, but it
hangs and the system is
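A sketch of what importing from the surviving half of the mirror usually looks like when booted from alternate media; the device name of the failed first disk (c1t0d0s0) is hypothetical:

    zpool import                   # should list rpool as DEGRADED with the first disk UNAVAIL
    zpool import -f -R /a rpool    # import under an alternate root so the live environment is untouched
    zpool detach rpool c1t0d0s0    # once imported, drop the dead disk from the mirror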
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
Hi,
more than a year ago I created a mirrored ZFS pool consisting of 2x1TB
HDDs using the OS X 10.5 ZFS kernel extension (zpool version 8, ZFS
version 2). Everything went fine and I used the pool to store personal
stuff, like lots of photos and music. (So getting the data back is
not time critical, but it is still important to me.)
Later, since the development of the ZFS extension was
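Before trying anything destructive, the on-disk labels can at least be inspected from the new host. A sketch with hypothetical device and pool names; read-only import assumes a build that supports it:

    zdb -l /dev/dsk/c5t0d0s0                  # each disk carries four label copies with pool name, guid and version
    zpool import -d /dev/dsk                  # list what can be reconstructed from those labels
    zpool import -f -o readonly=on macpool    # a read-only import avoids making the situation worse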
2008 Jun 05
6
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc at c4solutions.net>
NOTE: This is a last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me because I didn't try to replace the log on a running
system. My
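For readers finding this thread later: pool version 19 added official log-device removal, so the hack above should only be needed on older pools. A sketch with hypothetical pool and device names:

    zpool remove tank c3t0d0   # removes a dedicated log device from a running pool (pool version 19 or later)
    zpool upgrade tank         # older pools have to be upgraded before log removal is possible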
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it
before I shut down the server. Now I am not able to mount the pool. I am
not concerned with the data in this pool, but I would like to try to figure
out how to recover it.
I am running Nexenta 3.0 NCP (b134+).
I have tried a couple of the commands (zpool import -f and zpool import -FX
llift)
root@
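Besides -f and -FX, builds with missing-log-device support also accept -m, which lets the import proceed even though the slog is gone; whether the poster's b134-based NCP includes it is an assumption:

    zpool import -m llift      # import while ignoring the missing log device
    zpool import -m -F llift   # combine with recovery mode to roll back the last few transactions if needed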
2007 Nov 16
0
ZFS mirror and sun STK 2540 FC array
Hi all,
we have just bought a Sun X2200 M2 (4 GB / 2 Opteron 2214 / 2 x 250 GB
SATA2 disks, Solaris 10 update 4)
and a Sun STK 2540 FC array (8 x 146 GB SAS disks, 1 RAID controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this array.
I have created 2 volumes on the array
in RAID0 (128 KB stripe), presented to the host
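A minimal sketch of the layout being described, assuming the two array volumes appear under hypothetical MPxIO-style device names:

    zpool create tank mirror c4t600A0B80002Axxx1d0 c4t600A0B80002Axxx2d0   # one ZFS mirror across the two RAID0 LUNs
    zpool status tank                                                      # confirm both sides are ONLINE

Mirroring at the ZFS level rather than inside the array controller lets ZFS repair checksum errors from the other LUN.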
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected
a few; apologies in advance.
A couple of questions. First, I have a physical host (call him bob) that was
just installed with b134 a few days ago. I upgraded to b145 using the
instructions on the Illumos wiki yesterday. The pool has been upgraded (27)
and the zfs file systems have been upgraded (5).
chris@bob:~# zpool
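For reference, a sketch of the upgrade sequence the snippet is working through (the actual prompt output is not reproduced here):

    zpool upgrade      # with no arguments, reports pools still on an older on-disk version
    zpool upgrade -a   # upgrades every pool to the version the running bits support
    zfs upgrade -a     # separately upgrades the file systems to the current ZFS (zpl) version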
2006 Jun 26
2
raidz2 is alive!
Already making use of it, thank you!
http://www.justinconover.com/blog/?p=17
I took 6 x 250 GB disks and tried raidz2/raidz/none
# zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
df -h zfs
Filesystem             size   used  avail capacity  Mounted on
zfs                    915G    49K   915G     1%    /zfs
# zpool destroy -f zfs
Plain old raidz (raid-5ish)
# zpool create zfs raidz c0d0
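A rough capacity check for the numbers above, assuming base-10 disk sizes: raidz2 keeps n-2 disks of data, raidz keeps n-1.

    echo $(( (6 - 2) * 250 ))   # raidz2: 1000 GB, close to the ~915G df reports after GiB conversion and overhead
    echo $(( (6 - 1) * 250 ))   # raidz:  1250 GB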
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500 GB HDD and I decided to set up a FreeBSD file
server on it for learning purposes, and I moved almost all of my data
to it. Yesterday, naturally after I no longer had backups of the
data on the server, I had a controller failure (SiS 180 (oh, the
quality)) and the HDD was considered unplugged. When I noticed a few
checksum failures on `zpool status` (including two on
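With a single-disk pool there is no redundancy to repair from, so the options are limited to verifying and clearing; a sketch assuming a hypothetical pool name "tank" on the rediscovered disk:

    zpool status -v tank   # on a single-disk vdev, checksum errors show up as permanent errors on files
    zpool clear tank       # clear the error counters once the controller and cabling are trusted again
    zpool scrub tank       # re-verify everything that is still readable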
2011 Jan 29
19
multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20 minutes, and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
When the drive cage failed, it hung the server and the box
rebooted. After it rebooted, the entire pool was gone, in the state
below. I had only written a few
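A sketch of the first checks after such a reboot on FreeBSD, with a hypothetical pool name "tank"; note that FreeBSD 8.2 shipped an older pool version that predates the -F recovery option:

    camcontrol devlist     # confirm the disks in the new cage are visible again
    zpool import           # list whatever pools the kernel can reconstruct from the device labels
    zpool import -f tank   # force the import if the pool looks complete but was never exported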