This system is running stock build 111b on an Intel Atom D945GCLF2
motherboard. The pool consists of two mirrored 1TB SATA disks. I noticed the
system was locked up, rebooted it, and the pool status shows as follows:
  pool: atomfs
 state: FAULTED
status: An intent log record could not be read.
        Waiting for administrator intervention to fix the faulted pool.
action: Either restore the affected device(s) and run 'zpool online',
        or ignore the intent log records by running 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-K4
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        atomfs      FAULTED      0     0     1  bad intent log
          mirror    DEGRADED     0     0     6
            c8d0    DEGRADED     0     0     6  too many errors
            c9d0    DEGRADED     0     0     6  too many errors
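For readers following along: the two actions the ZFS-8000-K4 message offers
correspond to commands like the following. This is a hedged sketch using the
pool and device names from the status output above; note that 'zpool clear'
discards the unreadable intent log records, so any writes they held are lost.

    # option 1: restore the affected device(s), then bring them online
    zpool online atomfs c8d0     # and likewise for c9d0

    # option 2: give up on the unreadable intent log records
    zpool clear atomfs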
Also, I tried to run 'zpool clear', but the system crashes and reboots.
On Sat, Mar 20, 2010 at 9:19 PM, Patrick Tiquet <ptiquet at gmail.com> wrote:
> Also, I tried to run 'zpool clear', but the system crashes and reboots.

Please see if this link helps: http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view

--
Sriram
Belenix: www.belenix.org
>>>>> "sn" == Sriram Narayanan <sriram at belenix.org> writes:sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view yeah, but he has no slog, and he says ''zpool clear'' makes the system panic and reboot, so even from way over here that link looks useless. Patrick, maybe try a newer livecd from genunix.org like b130 or later and see if the panic is fixed so that you can import/clear/export the pool. The new livecd''s also have ''zpool import -F'' for Fix Harder (see manpage first). Let us know what happens. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100320/46154057/attachment.bin>
On Sun, Mar 21, 2010 at 12:32 AM, Miles Nordin <carton at ivy.net> wrote:
> yeah, but he has no slog, and he says 'zpool clear' makes the system
> panic and reboot, so even from way over here that link looks useless.
>
> Patrick, maybe try a newer livecd from genunix.org like b130 or later
> and see if the panic is fixed so that you can import/clear/export the
> pool. The new livecds also have 'zpool import -F' for Fix Harder (see
> manpage first). Let us know what happens.

Yes, I realized that after I posted to the list, and I replied again asking
him to use the OpenSolaris LiveCD. I just noticed that I replied directly
rather than to the list.

--
Sriram
Belenix: www.belenix.org
Thanks for the info. I'll try the live CD method when I have access to the system next week.
I tried booting b134 to attempt to recover the pool, importing with only one
disk of the mirror attached. zpool tells me to use -F for the import; that
fails and tells me to use -f, which also fails and tells me to use -F again.
Any thoughts?
jack at opensolaris:~# zpool import
  pool: atomfs
    id: 13446953150000736882
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        atomfs        FAULTED  corrupted data
          mirror-0    FAULTED  corrupted data
            c4t5d0    ONLINE
            c9d0      UNAVAIL  cannot open
jack at opensolaris:~# zpool import -f
  pool: atomfs
    id: 13446953150000736882
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        atomfs        FAULTED  corrupted data
          mirror-0    FAULTED  corrupted data
            c4t5d0    ONLINE
            c9d0      UNAVAIL  cannot open
jack at opensolaris:~# zpool import -f 13446953150000736882 newpool
cannot import 'atomfs' as 'newpool': one or more devices is currently unavailable
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of March 12, 2010 09:08:29 AM PST
        should correct the problem.  Recovery can be attempted
        by executing 'zpool import -F atomfs'.  A scrub of the pool
        is strongly recommended after recovery.
jack at opensolaris:~# zpool import -F atomfs
cannot import 'atomfs': pool may be in use from other system,
it was last accessed by blue (hostid: 0x82aa00) on Fri Mar 12 09:08:29 2010
use '-f' to import anyway
jack at opensolaris:~# zpool status
no pools available
jack at opensolaris:~# zpool import -f 13446953150000736882
cannot import 'atomfs': one or more devices is currently unavailable
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of March 12, 2010 09:08:29 AM PST
        should correct the problem.  Recovery can be attempted
        by executing 'zpool import -F atomfs'.  A scrub of the pool
        is strongly recommended after recovery.
On Fri, 2 Apr 2010, Patrick Tiquet wrote:
> I tried booting b134 to attempt to recover the pool, importing with
> only one disk of the mirror attached. zpool tells me to use -F for the
> import; that fails and tells me to use -f, which also fails and tells
> me to use -F again. Any thoughts?

It looks like it wants you to use both -f and -F at the same time. I don't
see that you tried that. Good luck.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Thanks, that worked!!

It needed "-Ff"

The pool has been recovered with minimal loss of data.
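Putting the resolution together: a hedged sketch of the sequence that
apparently worked, using the pool name and numeric id from the transcripts
above ('-f' forces import of a pool marked active on another system, '-F'
enables rewind recovery):

    zpool import -fF 13446953150000736882
    # then, as the recovery message strongly recommends:
    zpool scrub atomfs
    zpool status -v atomfs        # watch the scrub and check for errors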
Patrick,

I'm happy that you were able to recover your pool. Your original zpool status
says that this pool was last accessed on another system, which I believe is
what caused the pool to fail, particularly if it was accessed simultaneously
from two systems. It is important that the cause of the original pool failure
is identified to prevent it from happening again. This rewind pool recovery
is a last-ditch effort and might not recover all broken pools.

Thanks,
Cindy

On 04/02/10 12:32, Patrick Tiquet wrote:
> Thanks, that worked!!
>
> It needed "-Ff"
>
> The pool has been recovered with minimal loss of data.
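A hedged aside: on builds recent enough to have '-F', the zpool manpage also
documents an '-n' modifier that reports whether a rewind recovery would
succeed without actually performing it, which may be worth running before a
destructive rewind (verify against your build's manpage):

    zpool import -F -n atomfs     # dry run: report whether recovery is possible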
> Your original zpool status says that this pool was last accessed on
> another system, which I believe is what caused the pool to fail,
> particularly if it was accessed simultaneously from two systems.

The message "last accessed on another system" is the normal behavior if the
pool is ungracefully offlined for some reason and you then boot back up again
on the same system. I learned that by using a pool on an external disk and
accidentally knocking out the power cord of the external disk. The system
hung. I power cycled, couldn't boot normally, had to boot failsafe, and got
the above message while trying to import.
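To illustrate the mechanism described above: ZFS stamps the pool with the
hostid of the system that last had it open, and only a clean export clears
that "active" state. A minimal sketch, using a hypothetical pool name 'tank':

    zpool export tank     # clean handoff: clears the active state on disk
    zpool import tank     # later import proceeds without complaint

    # after a crash or power loss there was no export, so the pool still
    # looks active under the old hostid and import demands the force flag:
    zpool import -f tank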