Michael Armbrust
2008-Mar-31  00:25 UTC
[zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta
Hello,
I have a pool of four raidz-ed drives that I created in BSD that I would
like to move to a box with a Solaris kernel.  However, when I run zpool
import it displays the following message:
  pool: store
    id: 7369085894363868358
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:
        store       UNAVAIL  insufficient replicas
          raidz1    UNAVAIL  corrupted data
            c1d1s8  UNAVAIL  corrupted data
            c1d0s2  ONLINE
            c2d0s2  ONLINE
            c2d1p0  ONLINE
I know the drives/SATA card are actually good, because when I move them back
to the old system the pool imports without a problem.  Why can't Solaris
import this pool?  Even if one of the drives did have corrupted data, why
can't I import anyway and just recreate it from parity information?  Any
ideas?
Thanks!
Michael
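
A note on the "last accessed by another system" status: by itself that
condition normally only requires a forced import, so a reasonable first step
(not shown being tried anywhere in this thread) would be:

  zpool import -f store

or, using the numeric id from the listing above:

  zpool import -f 7369085894363868358

The -f flag only overrides the foreign-hostid check; it will not help if the
raidz1 vdev genuinely cannot assemble enough devices.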
Tim
2008-Mar-31
[zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta
What's on the rest of the disk?
Michael Armbrust
2008-Mar-31  04:11 UTC
[zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta
On Mar 30, 2008, at 7:42 PM, Tim wrote:
> What's on the rest of the disk?

Nothing, when I created the pool I used the entire disk.
Tim
2008-Mar-31
[zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta
Perhaps someone else can correct me if I'm wrong, but if you're using the
whole disk, ZFS shouldn't be displaying a slice when listing your disks,
should it?  I've *NEVER* seen it do that on any of mine except when using
partials/slices.

I would expect:
  c1d1s8
to be:
  c1d1
Bob Friesenhahn
2008-Mar-31  15:35 UTC
[zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta
On Mon, 31 Mar 2008, Tim wrote:
> Perhaps someone else can correct me if I'm wrong, but if you're using the
> whole disk, ZFS shouldn't be displaying a slice when listing your disks,
> should it?

Yes, this seems suspicious.  It is also suspicious that some devices use
'p' ("partition"?) while others use 's' ("slice"?).  The partitions may be
FreeBSD partitions or some other type that Solaris is not expecting.
FreeBSD can partition at a level visible to the BIOS, and it can further
sub-partition a FreeBSD partition for use in individual filesystems.

Regardless, I am very interested to hear if ZFS pools can really be
transferred back and forth between Solaris and FreeBSD.

Bob
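
To see what label or partition table Solaris actually finds on one of these
disks, something along these lines could be run (device names are simply the
ones from the zpool output above):

  fdisk -W - /dev/rdsk/c1d1p0     # dump the x86/BIOS partition table
  prtvtoc /dev/rdsk/c1d1s2        # print the Solaris VTOC, if one exists

If prtvtoc fails, there is no Solaris (SMI) VTOC or EFI label on the disk,
which would be consistent with the partitions being FreeBSD ones.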
Michael Armbrust
2008-Apr-01  17:14 UTC
[zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta
On Mon, Mar 31, 2008 at 8:35 AM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> Yes, this seems suspicious.  It is also suspicious that some devices
> use 'p' ("partition"?) while others use 's' ("slice"?).

I agree that it's really weird that it's trying to look at partitions and
slices when BSD has no problem recognizing the whole disks.  Is there any
way to override where zpool import is looking, or am I going to have to
recreate the pool from scratch?
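
zpool import does accept a -d option to control where it looks for devices
(by default it searches /dev/dsk).  A hedged sketch, assuming the pool data
actually lives on the p0 fdisk partitions rather than on Solaris slices:

  mkdir /tmp/bsdpool
  ln -s /dev/dsk/c1d1p0 /dev/dsk/c1d0p0 /dev/dsk/c2d0p0 /dev/dsk/c2d1p0 /tmp/bsdpool/
  zpool import -d /tmp/bsdpool store

Pointing -d at a directory containing only the intended device links
restricts the search to exactly those paths.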
Michael Armbrust
2008-Apr-02  07:29 UTC
[zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta
I seem to have made some progress.  For some reason when I ran prtvtoc there
was no slice 0.  I added it such that it would occupy the entire disk, and
now when I run an import it looks like this:
zpool import
  pool: store
    id: 7369085894363868358
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:
        store       UNAVAIL  insufficient replicas
          raidz1    UNAVAIL  corrupted data
            c1d1s0  ONLINE
            c1d0s0  ONLINE
            c2d0s0  ONLINE
            c2d1s0  ONLINE
All the disks are there and ONLINE now, so why can't it import the pool?
Is there some other command I need to run?
When I run zdb I get the following:
zdb -l /dev/dsk/c1d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=6
    name='store'
    state=0
    txg=4369529
    pool_guid=7369085894363868358
    hostid=4292973054
    hostname='athx2.CS.Berkeley.EDU'
    top_guid=5337248985013830096
    guid=16106001097676021869
    vdev_tree
        type='raidz'
        id=0
        guid=5337248985013830096
        nparity=1
        metaslab_array=14
        metaslab_shift=32
        ashift=9
        asize=1600334594048
        children[0]
                type='disk'
                id=0
                guid=4933540318280305407
                path='/dev/ad8'
                devid='ad:9QG141V6'
                whole_disk=0
                DTL=203
        children[1]
                type='disk'
                id=1
                guid=16106001097676021869
                path='/dev/ad4'
                devid='ad:9QG141S4'
                whole_disk=0
                DTL=202
        children[2]
                type='disk'
                id=2
                guid=6324746170060182490
                path='/dev/ad6'
                devid='ad:3PM03TV2'
                whole_disk=0
                DTL=201
        children[3]
                type='disk'
                id=3
                guid=15119646966975037008
                path='/dev/ad10'
                devid='ad:9QG141SC'
                whole_disk=0
                DTL=200
--------------------------------------------
LABEL 1
--------------------------------------------
    version=6
    name='store'
    state=0
    txg=4369529
    pool_guid=7369085894363868358
    hostid=4292973054
    hostname='athx2.CS.Berkeley.EDU'
    top_guid=5337248985013830096
    guid=16106001097676021869
    vdev_tree
        type='raidz'
        id=0
        guid=5337248985013830096
        nparity=1
        metaslab_array=14
        metaslab_shift=32
        ashift=9
        asize=1600334594048
        children[0]
                type='disk'
                id=0
                guid=4933540318280305407
                path='/dev/ad8'
                devid='ad:9QG141V6'
                whole_disk=0
                DTL=203
        children[1]
                type='disk'
                id=1
                guid=16106001097676021869
                path='/dev/ad4'
                devid='ad:9QG141S4'
                whole_disk=0
                DTL=202
        children[2]
                type='disk'
                id=2
                guid=6324746170060182490
                path='/dev/ad6'
                devid='ad:3PM03TV2'
                whole_disk=0
                DTL=201
        children[3]
                type='disk'
                id=3
                guid=15119646966975037008
                path='/dev/ad10'
                devid='ad:9QG141SC'
                whole_disk=0
                DTL=200
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
Thanks,
Michael
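
One detail worth noting about that zdb output: ZFS keeps four copies of the
vdev label, two at the front of the device and two in the last 512 KB, so
"failed to unpack label 2/3" usually means the slice does not end where the
device ZFS originally wrote to ended.  A rough way to check all four disks
(device names taken from the import listing above):

  for d in c1d1s0 c1d0s0 c2d0s0 c2d1s0; do
          echo "== $d =="
          zdb -l /dev/dsk/$d | grep -c pool_guid
  done

Each slice should report 4; anything less suggests the new slice geometry
still does not match what the FreeBSD pool used.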