
Displaying 20 results from an estimated 3000 matches similar to: "Is the error threshold for a degraded device configurable?"

2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran: zpool offline home c0t6d0, then zpool replace home c0t6d0 c8t1d0, and after the resilvering finished the pool still reports a degraded state. Hopefully this is incorrect. At this point the vdev in question now has
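A minimal sketch of the usual replacement sequence for this case, using the pool and device names from the post; the final detach is only needed if the old disk is still listed and keeps the pool DEGRADED:

  # zpool offline home c0t6d0
  # zpool replace home c0t6d0 c8t1d0
  # zpool status home                (wait for the resilver to reach 100%)
  # zpool detach home c0t6d0         (only if the replaced disk still shows up under the raidz2 vdev)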
2010 Jul 05
5
never ending resilver
Hi list, Here's my case:
  pool: mypool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:
        NAME            STATE     READ WRITE CKSUM
        filerbackup13
2009 Feb 12
1
strange 'too many errors' msg
Hi, just found on an X4500 with S10u6:
fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-GH, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Feb 11 16:03:26 CET 2009
PLATFORM: Sun Fire X4500, CSN: 00:14:4F:20:E0:2C, HOSTNAME: peng
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b
DESC: The number of checksum errors associated with a ZFS device exceeded
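To dig into a fault like this, the FMA tools show the full diagnosis; a generic sketch using the event ID from the message above:

  # fmadm faulty                                        (list active faults, including ZFS-8000-GH cases)
  # fmdump -v -u 74e6f0ec-b1e7-e49b-8d71-dc1c9b68ad2b   (details for this particular event)
  # fmstat -m zfs-diagnosis                             (activity counters for the diagnosis engine)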
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
Had an idea, could someone please tell me why it's wrong? (I feel like it has to be.) A RaidZ2 pool with one missing disk offers the same failure resilience as a healthy RaidZ1 pool (no data loss when one disk fails). I had initially wanted to do a single-parity raidz pool (5 disks), but after a recent scare decided raidz2 was the way to go. With the help of a sparse file
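The sparse-file trick mentioned above usually looks something like the sketch below; the device names and size are placeholders, not the poster's actual layout:

  # mkfile -n 1000g /var/tmp/fakedisk          (sparse file roughly the size of the real disks)
  # zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 /var/tmp/fakedisk
  # zpool offline tank /var/tmp/fakedisk       (run degraded, with raidz1-level protection, until the fifth disk arrives)
  # zpool replace tank /var/tmp/fakedisk c0t5d0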
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it was "constantly busy", and since our X4500 has always died miserably in the past when an HDD dies, they wanted to replace it before the HDD actually died. The usual procedure was followed: HDD replaced, resilvering started and ran for about 50 minutes. Then the system hung, same as always; all ZFS-related commands would just
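For reference, the usual X4500 drive-swap sequence is roughly the following; the pool name, device, and SATA attachment point are placeholders and must be read from cfgadm -al on the actual system:

  # zpool offline tank c1t3d0
  # cfgadm -al | grep c1t3d0                   (find the matching sataX/Y attachment point)
  # cfgadm -c unconfigure sata1/3              (then physically swap the drive)
  # cfgadm -c configure sata1/3
  # zpool replace tank c1t3d0                  (resilver onto the new disk in the same slot)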
2006 Jun 15
4
devid support for EFI partition improves zfs usability
Hi, guys, I have added devid support for EFI (not putback yet) and tested it with a zfs mirror; now the mirror can recover even when a USB hard disk is unplugged and replugged into a different USB port. But there are still some things that need improving. I'm far from a zfs expert, so correct me if I'm wrong. First, zfs should sense the hotplug event. I use zpool status to check the status of the
2010 Oct 20
5
Myth? 21 disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices Guide says something like "Don't put more than ___ disks into a single vdev." At first, I challenged this idea, because I see no reason why a 21-disk raidz3 would be bad. It seems like a good thing. I was operating on the assumption that resilver time is limited by the sustainable throughput of the disks, which
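For comparison, here is a rough sketch of the two layouts being debated, with hypothetical device names. One 21-disk raidz3 vdev and three 7-disk raidz1 vdevs both leave 18 disks of data capacity, but a resilver in the second layout only has to read the six surviving disks of the affected vdev:

  # zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
        c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0

  # zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
                      raidz1 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
                      raidz1 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0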
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology ... I'm actually speaking of hardware :) ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed it should be able to handle a lot of disks. I want to
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
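For the NFS vs. CIFS choice described above, the sharing commands are one-liners; the dataset name here is just an example:

  # zfs set sharenfs=on tank/media             (export over NFS)
  # zfs set sharesmb=on tank/media             (or export over CIFS via the in-kernel SMB server)
  # zfs get sharenfs,sharesmb tank/media       (verify what is currently shared)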
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and replaced it with a spare using zpool replace. Everything went well, but the resilvering process seems to be taking an eternity:
# zpool status
  pool: bigpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An attempt was
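When a spare has been brought in by hand like this, the sequence is typically the one below; bigpool is the pool from the post, the device names are placeholders:

  # zpool status bigpool                       (shows the spare resilvering alongside the failing disk)
  # zpool replace bigpool c5t3d0 c4t7d0        (replace the bad disk with the spare)
  # zpool detach bigpool c5t3d0                (after the resilver completes, detach the bad disk so the spare becomes permanent)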
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start. Status immediately after starting resilver:
# zpool status
  pool: rc-pool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
action: Determine
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao, the root filesystem of my thumper is a ZFS pool with a single disk:
bash-3.2# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0
        spares
          c0t7d0    AVAIL
          c1t6d0    AVAIL
          c1t7d0
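Yes: a single-disk pool can be turned into a mirror with zpool attach. A sketch against the rpool shown above, where the second device name is hypothetical; for a root pool the new disk also needs boot blocks:

  # zpool attach rpool c5t0d0s0 c5t4d0s0       (rpool becomes a two-way mirror and resilvers)
  # zpool status rpool                         (watch the resilver finish)
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0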
2010 Jul 12
7
How do I clean up corrupted files from zpool status -v?
Hi Folks. I have a system that was inadvertently left unmirrored for root. We were able to add a mirror disk, resilver, and fix the corrupted files (nothing very interesting was corrupt, whew), but zpool status -v still shows errors. Will this self-correct when we replace the degraded disk and resilver? Or is there something else that I'm not finding that I need to do to clean up?
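The usual way to flush stale entries from the zpool status -v error list, once the affected files have been fixed or removed, is a fresh scrub followed by a clear; the pool name below is hypothetical:

  # zpool scrub rpool                          (revalidate everything now that the mirror is whole)
  # zpool status -v rpool                      (the error list should shrink once the scrub completes)
  # zpool clear rpool                          (reset the error counters)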
2010 Apr 24
6
Extremely slow raidz resilvering
Hello everyone, As one of the steps in improving my ZFS home fileserver (snv_134) I wanted to replace a 1TB disk with a newer one of the same vendor/model/size, because the new one has a 64MB cache vs. 16MB in the previous one. The removed disk will be used for backups, so I thought it's better to have the 64MB-cache disk in the on-line pool than in the backup set sitting off-line all
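A generic way to watch where a slow resilver is bottlenecked, with placeholder pool and device names:

  # zpool replace tank c2t3d0 c2t5d0           (swap the old 1TB disk for the new one)
  # zpool status tank                          (progress and estimated time remaining)
  # zpool iostat -v tank 5                     (per-device throughput inside the pool)
  # iostat -xnz 5                              (service times; one disk with a very high asvc_t is a common culprit)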
2010 Oct 16
4
resilver question
Hi all, I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question? -- Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the curriculum is presented
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
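On builds that have the scrub/resilver throttle (roughly snv_129 and later), the resilver can be made to yield more to application I/O with kernel tunables; the names and values below are the commonly cited ones from that era, applied live with mdb and not persistent across reboots:

  # echo zfs_resilver_delay/W0t4 | mdb -kw            (larger delay between resilver I/Os)
  # echo zfs_resilver_min_time_ms/W0t1000 | mdb -kw   (spend less of each txg on resilver work)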
2008 Jul 23
72
The best motherboard for a home ZFS fileserver
I've been a fan of ZFS since I read about it last year. Now I'm on the way to building a home fileserver and I'm thinking of going with OpenSolaris and eventually ZFS!! Apart from the other components, the main problem is choosing the motherboard. The range on offer is incredibly wide and I'm lost. Minimum requirements should be: - working well with OpenSolaris ;-) -
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS: ZFS filesystem version 4, ZFS storage pool version 15. Yesterday my machine running FreeBSD 8.2-RELENG shut down with an "ad4 error detached" while I was copying a big file... and after the reboot two WD Green 1TB drives said goodbye. One of them died and the other shows ZFS errors: Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6
2010 Apr 12
5
How to catch ZFS errors with syslog?
I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a failed drive. zpool status shows that the pool is in a DEGRADED state. I want syslog to log these types of ZFS errors. I have syslog running and logging all sorts of errors to a log server, but this failed disk in the ZFS pool did not generate any syslog messages. The ZFS diagnosis engines are online, as seen below. hrs1zgpprd1#
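ZFS faults are reported through FMA rather than written straight to syslog, so the places to look are the fault manager's own logs and the syslog-msgs agent that forwards fault summaries; a generic checklist:

  # fmadm config | grep syslog-msgs           (the fmd module that posts SUNW-MSG-ID summaries to syslog)
  # fmdump -v                                 (the fault log, including ZFS-8000-* diagnoses)
  # fmdump -eV | head                         (the raw error events behind each diagnosis)
  # svcs -l system-log                        (confirm syslogd itself is running)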
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email about read errors on SATA channel 8 for a couple of days. The disk finally gave up last night at 17:40. I have to say I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or this morning, probably because my home dir was on the zpool with the failed disk