Robert Milkowski
2010-Jul-28 23:11 UTC
[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]
fyi
--
Robert Milkowski
http://milek.blogspot.com
-------- Original Message --------
Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim Haley <tim.haley at oracle.com>
To: PSARC-ext at sun.com
CC: zfs-team at sun.com
I am sponsoring the following case for George Wilson. Requested binding
is micro/patch. Since this is a straight-forward addition of a command
line option, I think it qualifies for self review. If an ARC member
disagrees, let me know and I'll convert to a fast-track.
Template Version: @(#)sac_nextcase 1.70 03/30/10 SMI
This information is Copyright (c) 2010, Oracle and/or its affiliates.
All rights reserved.
1. Introduction
1.1. Project/Component Working Name:
zpool import despite missing log
1.2. Name of Document Author/Supplier:
Author: George Wilson
1.3 Date of This Document:
26 July, 2010
4. Technical Description
OVERVIEW:
ZFS maintains a GUID (globally unique identifier) on each device, and
the sum of all GUIDs in a pool is stored in the ZFS uberblock.
This sum is used to determine the availability of all vdevs
within a pool when the pool is imported or opened. Pools that
contain a separate intent log device (e.g. a slog) will fail to
import when that device is removed or otherwise unavailable.
This proposal aims to address this particular issue.
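As a side note (not part of the case text): the per-device GUID that
feeds this sum can be read from a device's vdev label with zdb; the
device path below is only an example.
# zdb -l /dev/dsk/c5t0d0s0 | grep guid
This should print the pool and device GUID fields recorded in the
device's labels.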
PROPOSED SOLUTION:
This fast-track introduces a new command line flag to the
'zpool import' sub-command. The new option, '-m', allows
pools to be imported even when a log device is missing. The contents
of that log device are discarded and the pool will
operate as if the log device had been offlined.
MANPAGE DIFFS:
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c
cachefile]
- [-D] [-f] [-R root] [-n] [-F] -a
+ [-D] [-f] [-m] [-R root] [-n] [-F] -a
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c
cachefile]
- [-D] [-f] [-R root] [-n] [-F] pool |id [newpool]
+ [-D] [-f] [-m] [-R root] [-n] [-F] pool |id [newpool]
zpool import [-o mntopts] [ -o property=value] ... [-d dir |
- -c cachefile] [-D] [-f] [-n] [-F] [-R root] -a
+ -c cachefile] [-D] [-f] [-m] [-n] [-F] [-R root] -a
Imports all pools found in the search directories.
Identical to the previous command, except that all pools
+ -m
+
+ Allows a pool to import when there is a missing log device
EXAMPLES:
1). Configuration with a single intent log device:
# zpool status tank
pool: tank
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
c7t0d0 ONLINE 0 0 0
logs
c5t0d0 ONLINE 0 0 0
errors: No known data errors
# zpool import tank
The devices below are missing, use '-m' to import the pool
anyway:
c5t0d0 [log]
cannot import 'tank': one or more devices is currently
unavailable
# zpool import -m tank
# zpool status tank
pool: tank
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas
exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool
online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
c7t0d0 ONLINE 0 0 0
logs
1693927398582730352 UNAVAIL 0 0 0 was
/dev/dsk/c5t0d0
errors: No known data errors
2). Configuration with mirrored intent log device:
# zpool add tank log mirror c5t0d0 c5t1d0
zroot@diskmonster:/dev/dsk# zpool status tank
pool: tank
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
c7t0d0 ONLINE 0 0 0
logs
mirror-1 ONLINE 0 0 0
c5t0d0 ONLINE 0 0 0
c5t1d0 ONLINE 0 0 0
errors: No known data errors
# zpool import 429789444028972405
The devices below are missing, use '-m' to import the pool
anyway:
mirror-1 [log]
c5t0d0
c5t1d0
# zpool import -m tank
# zpool status tank
pool: tank
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas
exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool
online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
c7t0d0 ONLINE 0 0 0
logs
mirror-1 UNAVAIL 0 0 0
insufficient replicas
46385995713041169 UNAVAIL 0 0 0 was
/dev/dsk/c5t0d0
13821442324672734438 UNAVAIL 0 0 0 was
/dev/dsk/c5t1d0
errors: No known data errors
6. Resources and Schedule
6.4. Steering Committee requested information
6.4.1. Consolidation C-team Name:
ON
6.5. ARC review type: Automatic
6.6. ARC Exposure: open
James Dickens
2010-Jul-29 03:29 UTC
[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]
+1

On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski <milek at task.gda.pl> wrote:
> fyi
Richard Elling
2010-Jul-29 17:57 UTC
[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]
On Jul 28, 2010, at 4:11 PM, Robert Milkowski wrote:
> fyi

This covers the case where an exported pool has lost its log.

   zpool export
   [log disk or all disks in a mirrored log disappear]
   zpool import -- currently fails, missing top-level vdev

The following cases are already recoverable:

If the pool is not exported and the log disappears, then the pool can
import ok if the zpool.cache file is current.

   *crash*
   [log disk or all disks in a mirrored log disappear]
   zpool import -- succeeds, pool state is updated
   keep on truckin'

If the log device fails while the pool is imported, then the pool marks
the device as failed.

   [log disk or all disks in a mirrored log disappear]
   report error, change pool state to show failed log device
   keep on truckin'

 -- richard

--
Richard Elling
richard at nexenta.com    +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com
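For anyone who wants to reproduce the non-recoverable case Richard
describes without dedicating hardware, a rough file-backed sketch along
these lines should show it. The paths and pool name are placeholders,
and -m is the option proposed in this case, so the last step only works
on bits that include the change.

# mkfile 128m /var/tmp/vdev0 /var/tmp/slog0
# zpool create testpool /var/tmp/vdev0 log /var/tmp/slog0
# zpool export testpool
# rm /var/tmp/slog0
# zpool import -d /var/tmp testpool
(fails today: the pool cannot be imported, log device missing)
# zpool import -d /var/tmp -m testpool
(with the proposed -m: the pool imports and the log contents are discarded)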
Dmitry Sorokin
2010-Jul-31 02:49 UTC
[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]
Thanks for the update, Robert.
Currently I have a failed zpool with its slog missing, which I was unable to
recover, although I was able to find out what the GUID was for the slog
device (below is the output of the zpool import command).
I couldn't compile the logfix binary either, so I have run out of ideas for
how to recover this zpool.
So for now it just sits there untouched.
This proposed improvement to ZFS is definitely a hope for me.
When do you think it'll be implemented (roughly - this year, early next
year....) and would I be able to import this pool at its current
version 22 (snv_129)?
[root at storage ~]# zpool import
pool: tank
id: 1346464136813319526
state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
see: http://www.sun.com/msg/ZFS-8000-EY
config:
tank UNAVAIL missing device
raidz2-0 ONLINE
c4t0d0 ONLINE
c4t1d0 ONLINE
c4t2d0 ONLINE
c4t3d0 ONLINE
c4t4d0 ONLINE
c4t5d0 ONLINE
c4t6d0 ONLINE
c4t7d0 ONLINE
[root at storage ~]#
Best regards,
Dmitry
George Wilson
2010-Jul-31 02:58 UTC
[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]
Dmitry Sorokin wrote:
> When do you think it'll be implemented (roughly - this year, early next
> year....) and would I be able to import this pool at its current
> version 22 (snv_129)?

Dmitry,

I can't comment on when this will be available but I can tell you that
it will work with version 22. This requires that you have a pool that
is running a minimum of version 19.

Thanks,
George
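As an aside: version 19 is the pool version that introduced log device
removal, which is why it is the minimum here. For a pool that cannot be
imported, the on-disk version can be read from the label of any member
device with zdb; the device path below is just a placeholder.

# zdb -l /dev/dsk/c4t0d0s0 | grep version

For Dmitry's pool this should report version 22, the version he
mentions above.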
Jim Doyle
2010-Aug-01 05:40 UTC
[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]
A solution to this problem would be my early Christmas present!
Here is how I lost access to an otherwise healthy mirrored pool two months ago:
Box running snv_130 with two disks in a mirror and an iRAM battery-backed
ZIL device was shut down cleanly and powered down normally. While I was away
on travel, the PSU in the PC died while in its lowest-power standby state - this
caused the Li battery in the iRAM to discharge, and all of the SLOG contents in
the DRAM went poof.
Powered box back up... zpool import -f tank failed to bring the pool back
online.
After much research, I found the 'logfix' tool, got it to compile
on another snv_122 box, and followed the directions to synthesize a
"forged" log device header using the guid of the original device
extracted from the vdev list. This failed to work
despite the binary tool running and some inspection of the guids using zdb -l
spoofed_new_logdev.
What's intriguing is that zpool is not even properly reporting the
'missing device'. See the output below from zpool, then zdb -
notice that zdb shows
the remnants of a vdev for a log device, but with guid = 0 ????
# zpool import
pool: tank
id: 6218740473633775200
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-6X
config:
tank UNAVAIL missing device
mirror-0 ONLINE
c0t1d0 ONLINE
c0t2d0 ONLINE
Additional devices are known to be part of this pool, though their
# zdb -e tank
Configuration for import:
        vdev_children: 2
        version: 22
        pool_guid: 6218740473633775200
        name: 'tank'
        state: 0
        hostid: 9271202
        hostname: 'eon'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 6218740473633775200
            children[0]:
                type: 'mirror'
                id: 0
                guid: 5245507142600321917
                metaslab_array: 23
                metaslab_shift: 33
                ashift: 9
                asize: 1000188936192
                is_log: 0
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 15634594394239615149
                    phys_path: '/pci@0,0/pci1458,b002@11/disk@2,0:a'
                    whole_disk: 1
                    DTL: 55
                    path: '/dev/dsk/c0t1d0s0'
                    devid: 'id1,sd@SATA_____ST31000333AS________________9TE1JX8C/a'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 3144903288495510072
                    phys_path: '/pci@0,0/pci1458,b002@11/disk@1,0:a'
                    whole_disk: 1
                    DTL: 54
                    path: '/dev/dsk/c0t2d0s0'
                    devid: 'id1,sd@SATA_____ST31000528AS________________9VP2KWAM/a'
            children[1]:
                type: 'missing'
                id: 1
                guid: 0
--
This message posted from opensolaris.org
Dmitry Sorokin
2010-Dec-23 15:38 UTC
[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]
Yesterday I was able to import the zpool with its missing log device using
the "zpool import -f -m myzpool" command.
I had to boot from the Oracle Solaris Express Live CD. Then I just did
"zpool remove myzpool logdevice".
That's it. Now I've got my pool back with all the data and with ONLINE
status.
I had my zpool (with 8 x 500 GB disks) sitting for almost 6 months
unavailable.
This was my Christmas present!
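For anyone landing on this thread later, the sequence Dmitry describes
boils down to the following. The pool name is his; the argument to
'zpool remove' is the GUID (or name) that 'zpool status' shows for the
missing log device, and the numeric GUID below is only an example taken
from the case text earlier in the thread.

# zpool import -f -m myzpool
# zpool status myzpool
# zpool remove myzpool 1693927398582730352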
Best regards,
Dmitry
Office phone: 905.625.6471 ext. 104
Cell phone: 416.529.1627