similar to: Pool faulted in a bad way

Displaying 20 results from an estimated 900 matches similar to: "Pool faulted in a bad way"

2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a very large performance drop on our ZFS storage server. We have two pools: stor is a raidz built out of 7 iSCSI nodes, and home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level), we upgraded our NAS head from OpenSolaris b57
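A quick way to see whether the on-disk format actually moved to the new version is to compare what the zpool tools report with what the labels on the vdevs say; the device path below is a placeholder, not from the post:
  # zpool upgrade                            (reports whether any imported pool is below the version this build supports)
  # zdb -l /dev/dsk/c0t0d0s0 | grep version  (version field recorded in the vdev labels)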
2011 Jan 04
0
zpool import hangs system
Hello, I've been using NexentaStor Community Edition with no issues for a while now; however, last week I was going to rebuild a different system, so I started to copy all the data off it to a raidz2 volume on my CE system. This was going fine until I noticed that the copy had stalled and the entire system was non-responsive. I let it sit for several hours with no
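When an import wedges the whole box, a common sequence is to see what a recovery import would do before letting it rewind anything; the pool name here is a placeholder. Platforms that support it can also add -o readonly=on so nothing is written during the attempt.
  # zpool import            (list importable pools without importing anything)
  # zpool import -nF tank   (dry run: report what a rewind recovery would discard)
  # zpool import -F tank    (actually attempt the rewind recovery)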
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices for another pool. I don't think this is related, since the pools are offline pending access to the volumes. I tried running find /dev/zvol/dsk/poolname -type f and here is the stack; hopefully this gives someone a hint at what the issue is. I have
2008 Jan 10
2
Assistance needed expanding RAIDZ with larger drives
Hi all, Please can you help with my ZFS troubles: I currently have 3 x 400 GB Seagate NL35s and a 500 GB Samsung Spinpoint in a RAIDZ array that I wish to expand by systematically replacing each drive with a 750 GB Western Digital Caviar. After failing miserably, I'd like to start from scratch again if possible. When I last tried, the replace command hung for an age, network
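For reference, the usual procedure for growing a raidz by swapping in bigger disks is one replace and resilver at a time; pool and device names below are examples only:
  # zpool replace tank c1t1d0 c1t5d0         (swap one old drive for a new one)
  # zpool status tank                        (wait for the resilver to complete before the next replace)
  ...repeat for each remaining drive...
  # zpool export tank && zpool import tank   (older releases only see the extra space after a re-import; newer ones can use: zpool set autoexpand=on tank)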
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment call and now my raidz pool is corrupted. I have a raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7 and tried to add my pool to FreeNAS. After adding the ZFS disk, vdev and pool, I decided to back out and went back to OpenSolaris. Now my raidz pool will not mount, and I got the following errors. I hope some expert can help me recover from this error.
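Before attempting recovery it usually helps to confirm what the labels on each raidz member still say, since FreeNAS may have relabeled the disks; device and pool names below are examples:
  # zdb -l /dev/dsk/c2t0d0s0   (dump the four vdev labels: pool_guid, txg, children)
  # zpool import               (see how OpenSolaris detects the pool now)
  # zpool import -f mypool     (force the import if the hostid no longer matches)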
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: what would you do next to try to recover this ZFS pool? I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was composed of four 1.5 TiB disks. One disk is totally dead. Another had SMART errors, but using GNU ddrescue I was able to copy all the data off successfully. I have copied all 3 remaining disks as images using
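One way to work from the rescued images instead of the original disks is to attach them as block devices and point the import at that directory; a sketch, with made-up paths:
  # lofiadm -a /backup/disk1.img    (each image appears as a /dev/lofi/N block device)
  # lofiadm -a /backup/disk2.img
  # lofiadm -a /backup/disk3.img
  # zpool import -d /dev/lofi bank0 (search for the pool only among the lofi devices)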
2010 May 07
0
confused about zpool import -f and export
Hi, all, I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen, and I have to run the installer in HVM mode, then I'm trying to get it back up in PV mode. In that process the controller names change, and that's where I'm getting tripped up. I do a successful install, then I boot OK,
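The general idea, sketched with a hypothetical data pool (a root pool cannot export itself, so there the usual route is a forced import from live media):
  # zpool export tank   (writes the pool state out cleanly before the hardware change)
  # zpool import        (afterwards: the pool is found again by scanning /dev/dsk, new controller names and all)
  # zpool import tank   (no -f needed, because the export marked the pool as not in use)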
2009 Apr 08
2
ZFS data loss
Hi, I have lost a ZFS volume and I am hoping to get some help to recover the information (a couple of months' worth of work :( ). I have been using ZFS for more than 6 months on this project. Yesterday I ran a "zvol status" command; the system froze and rebooted. When it came back the discs were not available. See below the output of "zpool status", "format"
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
Hi, more than a year ago I created a mirrored ZFS pool consisting of 2 x 1 TB HDDs using the OS X 10.5 ZFS kernel extension (zpool version 8, ZFS version 2). Everything went fine and I used the pool to store personal stuff on it, like lots of photos and music. (So getting the data back is not time critical, but still important to me.) Later, since the development of the ZFS extension was
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the zfs file systems have been upgraded (5). chris at bob:~# zpool
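For anyone following along, these are the standard commands for checking which versions a b145 build supports versus what the pool and datasets are actually at; the pool name is a placeholder:
  # zpool upgrade                                       (reports whether any imported pool is below the supported version)
  # zpool upgrade -v                                    (lists every pool version this build understands)
  # zfs upgrade                                         (reports file systems that are below the current zfs version)
  # zfs upgrade -v                                      (lists the supported file system versions)
  # zpool get version tank; zfs get -r version tank     (exact versions per pool and per dataset)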
2010 Feb 23
1
Help with itadm commands
Hi - I'm trying to create an iSCSI target and go through the motions of making the following LUNs available - I am not able to run the command itadm create-target, as I get the following error: bash: itadm: command not found I need to get the following drive seen by VMware 0. c7t0d0 <Areca-ARC-1260-VOL#00-R001-279.40GB> /pci at 0,0/pci8086,29f1 at 1/pci8086,370 at
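itadm ships with the COMSTAR iSCSI target, so "command not found" usually means the target package and services are not installed or enabled. A hedged sketch; the package name varies between OpenSolaris builds and distributions:
  # pkg install SUNWiscsit                              (package name is an assumption; check your distribution)
  # svcadm enable stmf
  # svcadm enable -r svc:/network/iscsi/target:default
  # itadm create-target
  # sbdadm create-lu /dev/rdsk/c7t0d0s2                 (example slice; then map the LU to the VMware host with stmfadm add-view)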
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc at c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me because I didn't try to replace the log on a running system. My
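Hedged note for readers finding this later: builds newer than the one described here grew an official way to import a pool whose separate log device is gone, which avoids the manual surgery in this README; the pool name is an example:
  # zpool import -m tank   (-m allows the import even though the log device is missing)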
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
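The workaround that tends to get suggested for this class of space map assertion is to relax the assertion via /etc/system long enough to import the pool and copy the data off. This is an assumption drawn from similar reports, not an official fix, and it should be used with care:
  set zfs:zfs_recover=1
  set aok=1
  (add both lines to /etc/system, reboot, then scrub or evacuate the pool)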
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and it recently started failing to boot - it hangs after the copyright message whenever I use any of my GRUB menu options. Booting with an oi_148a LiveUSB I have had around since installation, I ran some zdb traversals over the rpool and zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed on the list with
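For reference, the kind of zdb traversal that can be run against an unimported rpool from the LiveUSB looks roughly like this; the flags are the commonly used ones, and -L skips leak tracking, which keeps zdb's own memory use down:
  # zdb -e -bcsvL rpool   (-e works on a pool that is not imported; traverse and verify checksums)
  # zdb -e -u rpool       (dump the active uberblock)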
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file server on it for learning purposes, and I moved almost all of my data to it. Yesterday, and naturally after no longer having backups of the data on the server, I had a controller failure (SiS 180 (oh, the quality)) and the HDD was considered unplugged. When I noticed a few checksum failures on `zpool status` (including two on
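Once the controller is back and the disk is visible again, the usual first steps are to clear the error counters and let a scrub re-verify everything; the pool name is assumed, and files that are truly damaged will be listed at the end:
  # zpool clear tank
  # zpool scrub tank
  # zpool status -v tank   (-v prints the names of any files with unrecoverable errors)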
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two SATA disks in the mirror. Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc Unfortunately the first disk, with the GRUB loader, has failed with unrecoverable block write/read errors. Now I have the problem of importing rpool after the first disk has failed. So I decided to do "zpool import -f rpool" with only the second disk, but it hangs and the system is
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to try to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have tried a couple of the commands (zpool import -f and zpool import -FX llift) root at
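Besides -f and -FX, builds somewhat newer than b134 accept an import flag specifically for a missing log device; it is worth checking whether the installed zpool supports it before going further (the pool name is taken from the post):
  # zpool import -m llift   (-m: import even though the separate log device is gone)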
2010 Nov 11
8
zpool import panics
Hi, I just had my Dell R610 reboot with a kernel panic when I threw a couple of zfs clone commands at it in the terminal. Now, after the system has rebooted, ZFS will no longer import my pool; instead the kernel panics again. I have had the same symptom on my other host, for which this one is basically the backup, so this one is my last line of defense. I tried to run zdb -e
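Two things that can be tried without risking another panic-on-import, roughly (the pool name is a placeholder): dump the configuration zdb derives from the labels, and do a dry run of the rewind recovery:
  # zdb -e -C tank          (print the pool configuration reconstructed from the on-disk labels)
  # zpool import -nF tank   (report what a recovery rewind would roll back, without importing)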
2013 Jan 08
3
pool metadata has duplicate children
I seem to have managed to end up with a pool that is confused about its child disks. The pool is faulted with corrupt metadata: pool: d state: FAULTED status: The pool metadata is corrupted and the pool cannot be opened. action: Destroy and re-create the pool from a backup source. see: http://illumos.org/msg/ZFS-8000-72 scan: none requested config: NAME STATE
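To see where the duplicate children actually come from, the labels on each member disk can be compared with the configuration zdb reconstructs; the device path below is a placeholder, the pool name "d" is from the post:
  # zdb -l /dev/dsk/c3t0d0s0   (dump all four labels; note the guid of every child listed)
  # zdb -e -C d                (print the config zdb assembles for pool d from those labels)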