similar to: How to interrupt a zpool scrub?

Displaying 20 results from an estimated 4000 matches similar to: "How to interrupt a zpool scrub?"

2010 Dec 20
3
Resilvering - Scrubbing: what's the difference?
Hello all, I read the thread "Resilver/scrub times?" for a few minutes and realized that I don't know the difference between resilvering and scrubbing. Shame on me. :-( I can't find an explanation in the man pages. I know the command to start a scrub is "zpool scrub tank", but what is the command to start a resilver, and what is the difference? -- Best Regards Alexander December 20
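On ZFS of this era there is no direct "start resilver" command; a resilver is triggered implicitly by device changes. A minimal sketch, assuming a pool named tank and hypothetical device names:

    # A scrub is started explicitly and verifies all allocated data:
    zpool scrub tank
    # A resilver starts automatically when a device is attached, replaced,
    # or brought back online; there is no separate resilver command:
    zpool replace tank c0t3d0 c0t4d0
    # Both operations report progress on the "scrub:" line of the status:
    zpool status tank

The difference in a sentence: a scrub checks every allocated block in the pool against its checksum, while a resilver copies only the data needed to bring one device back in sync.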
2009 Jul 10
5
Slow Resilvering Performance
I know this topic has been discussed many times... but what the hell makes zpool resilvering so slow? I'm running OpenSolaris 2009.06. I have had a large number of problematic disks due to a bad production batch, leading me to resilver quite a few times, progressively replacing each disk as it dies (and now preemptively removing disks.) My complaint is that resilvering ends up
2009 Oct 30
1
internal scrub keeps restarting resilvering?
After several days of trying to get a 1.5TB drive to resilver and it continually restarting, I eliminated all of the snapshot-taking facilities which were enabled, and

2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3 maxtxg=567354
2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
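Timestamped lines like these are the pool's internally logged events. A quick way to pull them out and watch for restarts (a sketch, assuming the pool is named tank):

    # -i includes internally logged events alongside user commands; repeated
    # "internal pool scrub done ... complete=0" entries mean the scrub or
    # resilver keeps being cut short and restarted:
    zpool history -i tank | grep 'internal pool scrub'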
2009 Nov 22
9
Resilver/scrub times?
Hi all! I've decided to take the "big jump" and build a ZFS home filer (although it might also do "other work" like caching DNS, mail, usenet, bittorrent and so forth). YAY! I wonder if anyone can shed some light on how long a pool scrub would take on a fairly decent rig. These are the specs as ordered:

Asus P5Q-EM mainboard
Core2 Quad 2.83 GHz
8GB DDR2/80
OS: 2 x
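As a rough back-of-the-envelope estimate (the figures here are assumptions, not from the post): scrub time is approximately allocated data divided by aggregate scan rate. For 2 TB of allocated data on four disks each scanning at ~100 MB/s,

    2 TB / (4 x 100 MB/s) = 2,000,000 MB / 400 MB/s = 5,000 s, about 1.4 hours

on an idle, lightly fragmented pool; heavy fragmentation or concurrent load can stretch that considerably.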
2008 Aug 03
1
Scrubbing only checks used data?
Hi there, I am currently evaluating OpenSolaris as a replacement for my Linux installations. I installed it as a Xen domU, so there is a remote chance that my observations are caused by Xen. First, my understanding of "zpool scrub" is "OK, go ahead and rewrite each block of each device of the zpool", whereas "resilvering" means "Make
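A simple way to see what scrub actually does (a sketch, assuming a pool named tank): scrub reads and verifies only allocated blocks against their checksums; it does not rewrite every block of every device, which is why an almost-empty pool scrubs in seconds.

    zpool list tank     # the ALLOC column is what a scrub will read
    zpool scrub tank
    zpool status tank   # reports completion after examining ALLOC, not SIZE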
2010 Jul 05
5
never ending resilver
Hi list, Here's my case:

  pool: mypool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
config:

        NAME          STATE     READ WRITE CKSUM
        filerbackup13
2006 Nov 03
27
# devices in raidz.
For s10u2, the documentation recommends 3 to 9 devices in raidz. What is the basis for this recommendation? I assume it is performance and not failure resilience, but I am just guessing... [I know, the recommendation was intended for people who know their raid cold, so it needed no further explanation] Thanks... oz -- ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540 I have a hard time
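If the guideline is about the width of a single raidz vdev, the usual way to scale past it is to build the pool from several vdevs. A sketch with hypothetical device names:

    # Two 5-wide raidz vdevs in one pool, rather than one 10-wide vdev;
    # ZFS stripes writes across the vdevs:
    zpool create tank \
        raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0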
2008 Jan 23
4
Synchronous scrub?
Say I'm firing off an at(1) or cron(1) job to do scrubs, and say I want to scrub two pools sequentially because they share one device. The first pool, BTW, is a mirror comprising a smaller disk and a subset of a larger disk. The other pool is the remainder of the larger disk. I see no documentation mentioning how to scrub, then wait until completed. I'm happy to be pointed
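As far as I know there is no built-in wait option on this era of zpool, but polling the status output works. A sketch, assuming pools named tank1 and tank2:

    # Start the first scrub and block until it finishes before starting the
    # second. "scrub in progress" matches the status wording on these builds;
    # adjust the pattern if your release prints something different.
    zpool scrub tank1
    while zpool status tank1 | grep -q 'scrub in progress'; do
        sleep 60
    done
    zpool scrub tank2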
2008 Sep 05
3
Snapshots during a scrub
I have a weekly scrub set up, and I've seen at least once now that it says "don't snapshot while scrubbing". Is this a data integrity issue, or will it make one or both of the processes take longer? Thanks
2009 Jul 13
7
OpenSolaris 2008.11 - resilver still restarting
Just look at this. I thought all the restarting resilver bugs were fixed, but it looks like something odd is still happening at the start. Status immediately after starting the resilver:

# zpool status
  pool: rc-pool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine
2010 Oct 20
5
Myth? 21-disk raidz3: "Don't put more than ___ disks in a vdev"
In a discussion a few weeks back, it was mentioned that the Best Practices Guide says something like "Don't put more than ___ disks into a single vdev." At first I challenged this idea, because I see no reason why a 21-disk raidz3 would be bad; it seems like a good thing. I was operating on the assumption that resilver time is limited by the sustainable throughput of the disks, which
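To make the throughput-versus-IOPS distinction concrete (numbers are illustrative assumptions, not from the thread): if resilver were purely throughput-bound, rebuilding a 1 TB disk at ~100 MB/s would take about

    1,000,000 MB / 100 MB/s = 10,000 s, under 3 hours.

But this era of resilver walks the block tree in roughly temporal order, so on a fragmented pool it is bound by random IOPS instead: 1 TB in 128 KB blocks is about 8 million blocks, and at ~200 random reads/s that is ~40,000 s, over 11 hours, largely independent of vdev width.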
2006 Oct 26
2
experiences with zpool errors and glm flipouts
Tonight I've been moving some of my personal data around on my desktop system and have hit some on-disk corruption. As you may know, I'm cursed, and so this had a high probability of ending badly. I have two SCSI disks and use Live Upgrade, and I have a partition, /aux0, where I tend to keep personal stuff. This is on an SB2500 running snv_46. The upshot is that I have a slice
2010 Apr 14
1
Checksum errors on and after resilver
Hi all, I recently experienced a disk failure on my home server and observed checksum errors while resilvering the pool and on the first scrub after the resilver had completed. Now everything seems fine, but I'm posting this to get help with calming my nerves and detecting any possible future faults. Let's start with some specs:

OSOL 2009.06
Intel SASUC8i (w/ LSI 1.30IT FW)
Gigabyte
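For calming nerves after an event like this, a common follow-up sequence (a sketch, assuming the pool is named tank):

    zpool scrub tank
    zpool status -v tank   # -v lists any files affected by permanent errors
    # If the scrub comes back clean, clear the now-stale error counters so
    # any future problem stands out immediately:
    zpool clear tank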
2010 Nov 01
6
Excruciatingly slow resilvering on X4540 (build 134)
Hello, I'm working with someone who replaced a failed 1TB drive (50% utilized) on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported:

scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go

and a week being 168 hours, that put completion at sometime tomorrow night. However, he just reported that zpool status shows:
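The two figures in that line are at least internally consistent: 306h elapsed at 63.87% done implies 306 / 0.6387 = about 479h total, leaving 479 - 306 = 173h, which matches the "173h7m to go". The real red flag is the projected ~20 days to resilver a half-full 1TB drive.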
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day. :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to
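Depending on the build, the resilver throttle tunables may help trade rebuild speed for interactive latency. This is an assumption about build-specific tunables; verify they exist in your kernel before relying on them:

    * In /etc/system (takes effect at next boot); values are illustrative.
    * Larger delays throttle resilver/scrub I/O so normal reads and writes
    * are starved less. Confirm these tunables exist in your build first.
    set zfs:zfs_resilver_delay = 4
    set zfs:zfs_scrub_delay = 8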
2006 Jul 13
7
system unresponsive after issuing a zpool attach
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM partitions to ZFS. I used Live Upgrade to migrate from U1 to U2 and that went without a hitch on my SunBlade 2000. And the initial conversion of one side of the UFS mirrors to a ZFS pool and subsequent data migration went fine. However, when I attempted to attach the second side mirrors as a mirror of the ZFS pool, all
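For reference, the attach step being described looks something like this (pool and slice names are hypothetical; the post doesn't give them):

    # Mirror the existing ZFS pool onto the slice freed from the SVM mirror;
    # the attach immediately kicks off a resilver of the new side:
    zpool attach rpool c0t0d0s0 c1t0d0s0
    zpool status rpool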
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and zfs-replaced it with a spare. Everything went well, but the resilvering process seems to be taking an eternity:

# zpool status
  pool: bigpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An attempt was
2009 Oct 14
14
ZFS disk failure question
So, my Areca controller has been complaining via email of read errors for a couple of days on SATA channel 8. The disk finally gave up last night at 17:40. I have to say, I really appreciate the Areca controller taking such good care of me. For some reason, I wasn't able to log into the server last night or in the morning, probably because my home dir was on the zpool with the failed disk
2010 Oct 16
4
resilver question
Hi all, I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the vdev in question? -- Best regards, roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the curriculum is presented
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran:

zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0

and after the resilvering finished, the pool reports a degraded state. Hopefully this is incorrect. At this point the vdev in question now has
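One thing worth checking in this situation (a sketch, not necessarily this poster's resolution): if the old drive was left in the OFFLINE state rather than detached after the replace completed, the vdev stays DEGRADED until it is removed by hand:

    zpool status home        # is the old c0t6d0 still listed as OFFLINE?
    # Detaching the leftover device should return the pool to ONLINE:
    zpool detach home c0t6d0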