I had several pool corruptions on my test box recently, and recovery imports
did indeed take a large part of a week (the process starved my 8 GB of RAM,
so the system hung and had to be reset with the hardware reset button,
which contributed to the long timeframe). Luckily for me, these import
attempts were cumulative, so after a while the system began working.
It seems that the system crashed during a major deletion operation and
needed more time to find and release the deferred-free blocks.
Not sure if my success would apply to your situation though.
iostat speeds can vary during pool maintenance operations (e.g. scrub,
and probably import and zdb walks too) depending on (metadata)
fragmentation, how busy the CPU is, etc. A more relevant metric here
is %busy for the disks.
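For reference, on Solaris-derived systems the per-disk busy percentage
can be watched like this (a generic usage example, not taken from the
original thread):

```shell
# Extended per-device statistics, a new sample every 10 seconds.
# The %b column is the per-device busy percentage; ~100 %b combined
# with only a few hundred kB/s of throughput usually means the disks
# are seek-bound (e.g. on fragmented metadata), not idle.
iostat -xn 10
```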
While researching my problem I found many older posts indicating that
this is "normal"; however, setting some kernel values with mdb may help
speed up the process and/or let it succeed.
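For what it's worth, the values most often mentioned in those threads
were the recovery-mode and assertion-override flags. The following is a
sketch of how they were typically set; the exact symbol names may differ
between builds, so treat this as an assumption, not a recipe:

```shell
# Hypothetical example: enable ZFS recovery mode and ignore some
# assertion failures for the duration of the import (run as root;
# symbol names may vary between OpenSolaris/OI builds).
echo "zfs_recover/W 1" | mdb -kw
echo "aok/W 1"         | mdb -kw

# The same can be made persistent across the reboots such imports
# tend to need, via /etc/system:
#   set zfs:zfs_recover = 1
#   set aok = 1
```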
To be short here, I can suggest that you read my recent threads from
that timeframe:
* (OI_148a) ZFS hangs when accessing the pool. How to trace what's
happening?
http://opensolaris.org/jive/thread.jspa?messageID=515689
* Questions on ZFS pool as a volume in another ZFS pool - details my
system's setup
http://opensolaris.org/jive/thread.jspa?threadID=138604&tstart=0
Since the system often froze by descending into swap hell, I had to
create a watchdog which would initiate an ungraceful reboot when the
conditions were "right".
My FreeRAM-Watchdog code and compiled i386 binary and a
primitive SMF service wrapper can be found here:
http://thumper.cos.ru/~jim/freeram-watchdog-20110531-smf.tgz
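The idea behind it can be sketched in a few lines of shell (this is an
illustration of the approach, not the actual C code from the tarball;
the threshold, the kstat name and the uadmin arguments are assumptions):

```shell
#!/bin/sh
# Convert a page count into megabytes (pages * pagesize / 1 MB).
mb_from_pages() {
    echo $(( $1 * $2 / 1048576 ))
}

# Watchdog loop (only entered when invoked with "run", so the helper
# above can be sourced and exercised separately).
if [ "$1" = "run" ]; then
    THRESHOLD_MB=128   # assumed cutoff before swap-thrashing sets in
    PAGESIZE=`pagesize`
    while :; do
        FREEPAGES=`kstat -p unix:0:system_pages:freemem | awk '{print $2}'`
        if [ `mb_from_pages $FREEPAGES $PAGESIZE` -lt $THRESHOLD_MB ]; then
            # Ungraceful immediate reboot, much like the reset button;
            # see uadmin(1M) for the exact cmd/fcn codes on your build.
            uadmin 1 1
        fi
        sleep 5
    done
fi
```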
Other related forum threads:
* zpool import hangs indefinitely (retry post in parts; too long?)
http://opensolaris.org/jive/thread.jspa?threadID=131237
* zpool import hangs
http://opensolaris.org/jive/thread.jspa?threadID=70205&tstart=15
----- Original Message -----
From: Christian Becker <christian.freisen at googlemail.com>
Date: Tuesday, May 31, 2011 18:02
Subject: [zfs-discuss] Importing Corrupted zpool never ends
To: zfs-discuss at opensolaris.org
> Hi There,
> I need to import a corrupted zpool after a double crash (mainboard and
> one HDD) on a different system.
> It is a RAIDZ1 - 3 HDDs - only 2 are working.
>
> Problem: zpool import -f poolname runs and runs and runs. Looking at
> iostat (not zpool iostat), it is doing something - but what? And why does it
> take so long (2x 1.5TB, Atom system)?
>
> iostat seems to read and write at about 500 kB/s - I hope that
> it doesn't work through the whole 1500 GB - that would need 40 days...
>
> Hope someone could help me.
>
> Thanks a lot
> Chris
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
+============================================================+
|                                                            |
| Jim Klimov                                                 |
| CTO, JSC COS&HT                                            |
|                                                            |
| +7-903-7705859 (cellular)      mailto:jimklimov at cos.ru   |
|            CC:admin at cos.ru, jimklimov at gmail.com        |
+============================================================+
| () ascii ribbon campaign - against html mail               |
| /\                       - against microsoft attachments   |
+============================================================+