similar to: import degraded pool/recovery tools

Displaying 20 results from an estimated 4000 matches similar to: "import degraded pool/recovery tools"

2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The message in /var/adm/messages for the disks was 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
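In a case like this, the usual goal is to bring the original device back online and then return the spare to the spare list; a minimal sketch, assuming a pool named tank, an original disk c2t1d0, and an in-use spare c5t3d0 (all names hypothetical, not from the post):

  # zpool online tank c2t1d0    # ask ZFS to reactivate the original device
  # zpool detach tank c5t3d0    # once the pool is healthy, detach the spare back to AVAIL

If the online fails and the spare has already resilvered, detaching the faulted device instead promotes the spare permanently.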
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point the vdev in question now has
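For reference, that replace sequence normally ends with the old device dropping out of the configuration on its own, so a lingering degraded state is worth inspecting with:

  # zpool status -v home    # shows per-device state and why DEGRADED persists

If the replaced disk c0t6d0 is still listed under the vdev after the resilver completes, detaching it (zpool detach home c0t6d0) is one way to clear it, since a replace runs as a temporary mirror of old and new devices.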
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it was in hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated.
------------------- EMAIL -------------------
List of faulty resources:
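For an import that panics the machine, a common first resort on builds that have it (recovery mode predates this 2011 post) is the rewind-style import, which rolls back to the last consistent txg; a hedged sketch, with "tank" taken from the post:

  # zpool import -nF tank   # dry run: report what a rewind would discard, without importing
  # zpool import -F tank    # actually rewind and import

The -n form is worth running first, since -F can throw away the last few seconds of writes.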
2010 Aug 15
2
Is the error threshold for a degraded device configurable?
I look after an x4500 for a client and we keep getting drives marked as degraded with just over 20 checksum errors. Most of these errors appear to be driver- or hardware-related, and their frequency increases during a resilver, which can lead to a death spiral. The increase in errors within a vdev during a resilver (I recently had three drives in an 8-drive raidz vdev "degraded")
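As far as I can tell the threshold is not a simple per-pool tunable; the DEGRADED decision is made by the FMA zfs-diagnosis engine from its SERD counters. A hedged sketch for inspecting and resetting the state (pool and device names hypothetical; whether your build exposes the SERD parameters varies):

  # fmstat -s -m zfs-diagnosis    # show the SERD engines the ZFS diagnosis module is tracking
  # zpool clear tank c3t2d0       # reset the device's error counters once the cause is addressed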
2013 Mar 23
0
Drives going offline in Zpool
Hi, I have a Dell MD1200 connected to two heads (Dell R710). The heads have a Perc H800 card, and the drives are configured as RAID0 (Virtual Disk) in the RAID controller. One of the drives crashed and was replaced by a spare. Resilvering was triggered but fails to complete due to drives going offline. I have to reboot the head (R710) and the drives come back online. This happened repeatedly when
2009 Aug 21
0
possible resilver bugs
Hi, I don't have the means to replicate this issue or to file a bug about it, so I'd like your opinion on these issues, or perhaps a bug report can be made if necessary. In a scenario with, say, three raidz2 groups each consisting of several disks, two disks fail in different raidz groups. You have a degraded pool and two degraded raidz2 groups. Now, one replaces the first disk and starts resilvering; it
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
Had an idea, could someone please tell me why it's wrong? (I feel like it has to be.) A raidz2 pool with one missing disk offers the same failure resilience as a healthy raidz1 pool (no data loss when one disk fails). I had initially wanted to do a single-parity raidz pool (5 disks), but after a recent scare decided raidz2 was the way to go. With the help of a sparse file
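The claim is cheap to test with file-backed vdevs before committing real disks; a minimal sketch, assuming mkfile and a scratch directory are available (all names hypothetical):

  # mkfile -n 256m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5
  # zpool create testz2 raidz2 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5
  # zpool offline testz2 /var/tmp/d5   # now DEGRADED, yet it still tolerates one more failure, like a healthy raidz1
  # zpool destroy testz2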
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell makes zpool resilvering so slow? I'm running OpenSolaris 2009.06. I have had a large number of problematic disks due to a bad production batch, leading me to resilver quite a few times, progressively replacing each disk as it dies (and now preemptively removing disks). My complaint is that resilvering ends up
2006 Nov 30
0
ZFS caught resilvering when only one side of mirror present
When I booted my laptop up this morning it took much longer than normal, and there was a lot of disk activity even after I logged in. A quick use of dtrace and iostat revealed that all the writes were to the zpool. I ran zpool status and found that the pool was resilvering. The strange thing is that while the pool is a mirror, one side of it is offline, since it is on an external USB disk - which
2008 Sep 16
1
Interesting Pool Import Failure
Hello... Since there has been much discussion about zpool import failures resulting in the loss of an entire pool, I thought I would illustrate a scenario I just went through to recover a faulted pool that wouldn't import under Solaris 10 U5. While this is a simple scenario, and the data was not terribly important, I think the exercise should at least give some peace of mind to those who
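The first steps in that kind of exercise are usually non-destructive: let ZFS re-scan for importable pools and read the vdev labels before forcing anything. A sketch (device path hypothetical):

  # zpool import             # scan the default device directory for importable pools
  # zpool import -d /dev/dsk # point the scan at an explicit device directory
  # zdb -l /dev/dsk/c1t0d0s0 # dump the four vdev labels on a suspect device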
2009 Oct 30
1
internal scrub keeps restarting resilvering?
After several days of trying to get a 1.5TB drive to resilver and it continually restarting, I eliminated all of the snapshot-taking facilities which were enabled, and
2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3 maxtxg=567354
2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
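The bracketed "[internal pool scrub ...]" lines are internally logged events; they can be pulled directly, which is the quickest way to confirm a resilver really is restarting rather than progressing:

  # zpool history -i mypool | grep scrub    # -i includes internal events; "mypool" stands in for the poster's pool name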
2009 Mar 30
3
Data corruption during resilver operation
I'm in well over my head with this report from zpool status saying:
root # zpool status z3
  pool: z3
 state: DEGRADED
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
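The "Restore the file in question" advice presumes you know which file is damaged; the verbose form lists the affected paths explicitly:

  # zpool status -v z3    # appends "Permanent errors have been detected in the following files:" plus the paths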
2013 Jan 19
0
zpool errors without fmdump or dmesg errors
Hi all, I am running S11 on a Dell PE650. It has 5 zpools attached that are made out of 240 drives, connected via fibre. On Thursday, all of a sudden, two out of three zpools on one FC channel showed numerous errors, and one of them showed this:
root@solaris11a:~# zpool status vsmPool01
  pool: vsmPool01
 state: SUSPENDED
status: One or more devices is currently being resilvered. The pool
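The standard cross-check when zpool error counters and FMA disagree is to query both ends of the fault pipeline; a sketch using the pool name from the post:

  # fmdump -e                   # raw error-report telemetry, logged even when no fault was diagnosed
  # fmadm faulty                # faults FMA actually diagnosed and acted on
  # zpool status -v vsmPool01   # per-device READ/WRITE/CKSUM counters on the ZFS side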
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start. Status immediately after starting the resilver:
# zpool status
  pool: rc-pool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
action: Determine
2010 Dec 20
3
Resilvering - Scrubbing: what's the difference?
Hello All, I read the thread "Resilver/scrub times?" for a few minutes and realized that I don't know the difference between resilvering and scrubbing. Shame on me. :-( I can't find an explanation in the man pages. I know the command to start scrubbing, "zpool scrub tank", but what is the command to start a resilver, and what is the difference? -- Best Regards, Alexander, December 20
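Short answer to the question above: there is no command to start a resilver. A scrub is explicit and verifies every allocated block against its checksum; a resilver is kicked off implicitly by operations that attach or replace a device, and only rebuilds the data that device needs. For example (device names hypothetical):

  # zpool scrub tank                   # explicit: walk and verify the whole pool
  # zpool replace tank c1t2d0 c1t3d0   # implicit: a resilver starts onto the new device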
2010 Jul 05
5
never ending resilver
Hi list, Here's my case:
  pool: mypool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:
        NAME          STATE   READ WRITE CKSUM
        filerbackup13
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi, One of my colleagues was confused by the output of 'zpool status' on a pool where a hot spare is being resilvered in after a drive failure:
$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub:
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone, As one of the steps of improving my ZFS home fileserver (snv_134), I wanted to replace a 1TB disk with a newer one of the same vendor/model/size, because the new one has a 64MB cache vs. 16MB in the previous one. The removed disk will be used for backups, so I thought it's better to have the 64MB-cache disk in the online pool than in the backup set sitting offline all
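For a like-for-like swap, zpool replace has two forms: the one-argument form if the new disk goes into the old disk's slot, or the two-argument form if both can be connected at once, which keeps full redundancy during the resilver (pool and device names hypothetical):

  # zpool replace mypool c2t3d0          # new disk inserted in the same slot as the old one
  # zpool replace mypool c2t3d0 c4t0d0   # new disk attached alongside; the old one detaches after the resilver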
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss, One of the disks started to behave strangely.
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv  port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
2010 Oct 20
5
Myth? 21-disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices Guide says something like "Don't put more than ___ disks into a single vdev." At first I challenged this idea, because I see no reason why a 21-disk raidz3 would be bad. It seems like a good thing. I was operating on the assumption that resilver time is limited by the sustainable throughput of the disks, which
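For scale, the throughput-limited lower bound is simple arithmetic (round numbers, not from the post): a 1 TB disk that sustains ~100 MB/s takes at least

  1,000,000 MB / 100 MB/s = 10,000 s, or roughly 2.8 hours

to rewrite in full. A mirror resilver can approach that bound; a wide raidz resilver is dominated by small random reads across all the surviving disks, which is why real-world times are often many times longer.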