similar to: zpool upgrade while some disks are faulted

Displaying 20 results from an estimated 70000 matches similar to: "zpool upgrade while some disks are faulted"

2009 Aug 27
0
How are you supposed to remove faulted spares from pools?
We have a situation where all of the spares in a set of pools have gone into a faulted state and now, apparently, we can't remove them or otherwise de-fault them. I'm confident that the underlying disks are fine, but ZFS seems quite unwilling to do anything with the spares situation. (The specific faulted state is 'FAULTED corrupted data' in 'zpool
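A hedged sketch of the usual way out, assuming the spare device really is healthy; the pool name `tank` and device `c4t0d0` are placeholders, not from the original post:

```shell
# Hypothetical pool/device names; run against a live pool.
zpool status tank            # identify the faulted spare, e.g. c4t0d0

# Hot spares (unlike top-level vdevs) can be removed:
zpool remove tank c4t0d0

# If the underlying disk is actually fine, clear the fault
# and re-add the device as a spare:
zpool clear tank c4t0d0
zpool add tank spare c4t0d0
```

If `zpool remove` refuses while the spare is marked FAULTED, clearing first and then removing is worth trying; the order that works has varied between releases.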
2007 Nov 27
0
zpool detach hangs, causes other zpool commands, format, df etc. to hang
Customer has a Thumper running: SunOS x4501 5.10 Generic_120012-14 i86pc i386 i86pc where running "zpool detach disk c6t7d0" to detach a mirror causes the zpool command to hang with the following kernel stack trace: PC: _resume_from_idle+0xf8 CMD: zpool detach disk1 c6t7d0 stack pointer for thread fffffe84d34b4920: fffffe8001c30c10 [ fffffe8001c30c10 _resume_from_idle+0xf8() ]
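For reference, the non-hanging path looks like this on a healthy pool; pool and device names here are hypothetical, and the mdb step is the standard Solaris way to capture kernel stacks for a support case:

```shell
# Detaching one side of a mirror (hypothetical names).
zpool status tank                 # confirm c6t7d0 is one side of a mirror vdev
zpool detach tank c6t7d0          # should return promptly on a healthy pool

# If zpool commands wedge like the case above, dump all kernel
# thread stacks before rebooting, for the bug report:
echo "::threadlist -v" | mdb -k > /var/tmp/threads.txt
```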
2007 Jun 16
5
zpool mirror faulted
I have a strange problem with a faulted zpool (two way mirror): [root at einstein;0]~# zpool status poolm pool: poolm state: FAULTED scrub: none requested config: NAME STATE READ WRITE CKSUM poolm UNAVAIL 0 0 0 insufficient replicas mirror UNAVAIL 0 0 0 corrupted data c2t0d0s0 ONLINE 0
2010 Aug 17
4
Narrow escape with FAULTED disks
Nothing like a "heart in mouth" moment to shave years from your life. I rebooted a snv_132 box in perfect health, and it came back up with two FAULTED disks in the same vdev group. Everything I found in an hour on Google basically said "your data is gone". All 45 TB of it. A postmortem with fmadm showed a single disk had failed with a SMART predictive failure. No indication why the
2006 Nov 01
0
RAID-Z1 pool became faulted when a disk was removed.
So I have attached to my system two 7-disk SCSI arrays, each of 18.2 GB disks. Each of them is a RAID-Z1 zpool. I had a disk I thought was a dud, so I pulled the fifth disk in my array and put the dud in. Sure enough, Solaris started spitting errors like there was no tomorrow in dmesg, and wouldn't use the disk. Ah well. Remove it, put the original back in - hey, Solaris still thinks
2010 Jul 06
3
Help with Faulted Zpool Call for Help(Cross post)
Hello list, I posted this a few days ago on the opensolaris-discuss@ list. I am posting here because there may be too much noise on the other lists. I have been without this zfs set for a week now. My main concern at this point: is it even possible to recover this zpool? How does the metadata work? What tool could I use to rebuild the corrupted parts, or even find out which parts are corrupted? Most, but
2008 Apr 02
1
delete old zpool config?
Hi experts zpool import shows some weird config of an old zpool bash-3.00# zpool import pool: data1 id: 7539031628606861598 state: FAULTED status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: http://www.sun.com/msg/ZFS-8000-3C config: data1 UNAVAIL insufficient replicas
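Stale configs like this come from old ZFS labels still present on the disks. A hedged sketch of clearing them; `zpool labelclear` is not available on all 2008-era releases, and the device name is a placeholder:

```shell
# Confirm the stale pool is not importable/wanted first:
zpool import

# On releases that ship it, wipe the old ZFS label from the
# device that still carries the dead pool's config
# (hypothetical device name; this destroys the label):
zpool labelclear -f /dev/dsk/c1t2d0s0

# On older builds the fallback was overwriting the label areas
# at the start and end of the device with dd - far riskier.
```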
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all, I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the following command: # zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 It worked fine, but I was slightly confused by the size yield (99 GB vs the 116 GB I had on my other RAID-Z1 pool of same-sized disks). I thought one of the disks might have been to blame, so I tried swapping it out
2010 Feb 24
0
disks in zpool gone at the same time
Hi, Yesterday all of the disks in two of my zpools got disconnected at the same time. They are not real disks - LUNs from a StorageTek 2530 array. What could that be - a failing LSI card, or the mpt driver in 2009.06? After reboot I got four disks in FAILED state - zpool clear fixed things with a resilver. Here is how it started (/var/adm/messages): Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info] /pci at 0,0/pci10de,5d at
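The recovery the poster describes for a transient path failure can be sketched as follows; the pool name is hypothetical:

```shell
# After the LUNs reappear (hypothetical pool name):
zpool clear tank          # reset error counters; devices reopen and resilver
zpool status -x tank      # watch resilver progress
zpool scrub tank          # once ONLINE, scrub to confirm data integrity
```

A scrub afterwards is cheap insurance: it verifies every block's checksum rather than trusting that the resilver caught everything.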
2010 Jan 03
2
"zpool import -f" not forceful enough?
I had to use the labelfix hack (and I had to recompile it at that) on 1/2 of an old zpool. I made this change: /* zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size); */ zio_checksum_table[ZIO_CHECKSUM_LABEL].ci_func[0](buf, size, &zc); and I'm assuming [0] is the correct endianness, since afterwards I saw it come up with "zpool import". Unfortunately, I
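Worth noting for readers hitting the same wall: builds from around snv_128 onward added a pool-recovery mode to `zpool import` that can make label surgery unnecessary. A hedged sketch, with a placeholder pool name:

```shell
# -f only overrides "pool in use by another system":
zpool import -f tank

# -F (newer builds) rolls the pool back to the last consistent
# txg, discarding the most recent transactions; -n previews
# what would be lost without committing:
zpool import -nF tank
zpool import -F tank
```

`-F` only helps when the damage is recent uberblock/txg corruption; it is not a substitute for labelfix when the labels themselves are gone.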
2006 Jul 19
1
Q: T2000: raidctl vs. zpool status
Hi all, IHACWHAC (I have a colleague who has a customer - hello, if you're listening :-) who's trying to build and test a scenario where he can salvage the data off the (internal ?) disks of a T2000 in case the sysboard and with it the on-board raid controller dies. If I understood correctly, he replaces the motherboard, does some magic to get the raid config back, but even
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
Hi, more than a year ago I created a mirrored ZFS pool consisting of 2x1TB HDDs using the OSX 10.5 ZFS Kernel Extension (Zpool Version 8, ZFS Version 2). Everything went fine and I used the pool to store personal stuff on it, like lots of photos and music. (So getting the data back is not time critical, but still important to me.) Later, since the development of the ZFS extension was
2011 May 19
2
Faulted Pool Question
I just got a call from another of our admins, as I am the resident ZFS expert, and they have opened a support case with Oracle, but I figured I'd ask here as well, as this forum often provides better, faster answers :-) We have a server (M4000) with 6 FC attached SE-3511 disk arrays (some behind a 6920 DSP engine). There are many LUNs, all about 500 GB and mirrored via ZFS. The LUNs
2010 Apr 21
2
HELP! zpool corrupted data
Hello, Due to a power outage our file server running FreeBSD 8.0p2 will no longer come up due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143 cd: FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64 mfsbsd# zpool import pool: tank id: 1998957762692994918 state: FAULTED
2008 Jan 11
4
zpool remove problem
I have a pool with 3 partitions in it. However, one of them is no longer valid: the disk was removed and modified so that the original partition is no longer available. I cannot get zpool to remove it from the pool. How do I tell ZFS to take this item out of the pool, if not with "zpool remove"? Thanks, Wyllys here is my pool: zpool status pool: bigpool state: FAULTED status:
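The answer here hinges on what kind of vdev the dead partition is; a hedged summary of the era's rules, with placeholder names:

```shell
# What "zpool remove" can take out varies by vdev type:
zpool remove bigpool c1t5d0      # hot spares and cache devices only
zpool detach bigpool c1t5d0      # one side of a mirror vdev

# A plain (non-redundant) top-level data vdev could not be
# removed at all in 2008-era ZFS; top-level device evacuation
# arrived much later. zpool replace was the only way forward:
zpool replace bigpool c1t5d0 c2t0d0
```

If the vanished partition was a non-redundant top-level vdev, the FAULTED state above is expected and the pool generally cannot be repaired by removal.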
2008 Jul 23
0
where is zpool status information kept?
The OS's / is on a mirror of /dev/dsk/c1t0d0s0 and /dev/dsk/c1t1d0s0, and then I created home_pool using a mirror; here is the mirror information. pool: omp_pool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM omp_pool ONLINE 0 0 0 mirror ONLINE 0 0 0 c1t3d0s0 ONLINE
2009 Aug 02
1
zpool status showing wrong device name (similar to: ZFS confused about disk controller )
Hi All, over the last couple of weeks, I had to boot from my rpool from various physical machines because some component on my laptop mainboard blew up (you know that burned electronics smell?). I can't retrospectively document all I did, but I am sure I recreated the boot-archive, ran devfsadm -C and deleted /etc/zfs/zpool.cache several times. Now zpool status is referring to a
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The message in /var/adm/messages for the disks was 'device busy too long'. Then SMF printed this message: Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007 Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
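The usual way to return a spare to the AVAIL state once the original disk is serviceable again can be sketched as below; device names are hypothetical, not from the post:

```shell
# Bring the original device back (hypothetical names):
zpool online tank c2t3d0

# If ZFS keeps preferring the spare, explicitly replace the
# spare with the original:
zpool replace tank c4t7d0 c2t3d0

# After the resilver completes, detach the spare so it
# returns to the AVAIL spare list:
zpool detach tank c4t7d0
```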
2012 Jan 08
0
Pool faulted in a bad way
Hello, I have been asked to take a look at a pool on an old OSOL 2009.06 host. It had been left unattended for a long time and was found in a FAULTED state. Two of the disks in the raidz2 pool seem to have failed; one has been replaced by a spare, the other is UNAVAIL. The machine was restarted and the damaged disks were removed to make it possible to access the pool without it hanging
2009 Apr 28
1
zfs-fuse mirror unavailable after upgrade to ubuntu 9.04
Hi there, juliusr at rainforest:~$ cat /etc/issue Ubuntu 9.04 \n \l juliusr at rainforest:~$ dpkg -l | grep -i zfs-fuse ii zfs-fuse 0.5.1-1ubuntu5 I have two 320gb sata disks connected to a PCI raid controller: juliusr at rainforest:~$ lspci | grep -i sata 00:08.0 RAID bus controller: Silicon Image, Inc. SiI 3512 [SATALink/SATARaid] Serial ATA Controller (rev