Hello All,

I've recently run into an issue I can't seem to resolve. I have been running a zpool populated with two RAID-Z1 VDEVs and a file on the (separate) OS drive for the ZIL:

        raidz1-0    ONLINE
          c12t0d0   ONLINE
          c12t1d0   ONLINE
          c12t2d0   ONLINE
          c12t3d0   ONLINE
        raidz1-2    ONLINE
          c12t4d0   ONLINE
          c12t5d0   ONLINE
          c13t0d0   ONLINE
          c13t1d0   ONLINE
        logs
          /ZIL-Log.img

This was running on Nexenta Community Edition v3. Everything was going smoothly until today, when the OS hard drive crashed and I was no longer able to boot from it. I had migrated this setup from an OpenSolaris install some months back and I still had the old drive intact. I put it in the system, booted it up and tried to import the zpool. Unfortunately, I have not been successful. Previously, when migrating from OSOL to Nexenta, I was able to get the new system to recognize and import the ZIL device file. Since that file was lost in the drive crash I have not been able to duplicate that success.

Here is the output from a 'zpool import' command:

  pool: tank
    id: 9013303135438223804
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        tank          UNAVAIL  missing device
          raidz1-0    ONLINE
            c12t0d0   ONLINE
            c12t1d0   ONLINE
            c12t5d0   ONLINE
            c12t3d0   ONLINE
          raidz1-2    ONLINE
            c12t4d0   ONLINE
            c12t2d0   ONLINE
            c13t0d0   ONLINE
            c13t1d0   ONLINE

I created a new file for the ZIL (using mkfile) and tried to specify it for inclusion with -d, but it doesn't get recognized, probably because it was never part of the original zpool. I also symlinked the new ZIL file into /dev/dsk but that didn't make any difference either.

Any suggestions?

Andrew Kener
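For reference, a rough sketch of the mkfile / -d attempt described above might look like the following; the file size and search directories are assumptions, not the values actually used:

  fileserver ~ # mkfile 1g /ZIL-Log.img
  fileserver ~ # zpool import -d / -d /dev/dsk tank

Note that 'zpool import -d' only scans the given directories for devices or files that already carry the pool's labels, so a freshly created file has nothing to match and will not be picked up, which is consistent with the behaviour described above.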
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Andrew Kener
>
> the OS hard drive crashed [and log device]

Here's what I know: In zpool >= 19, if you import this, it will prompt you to confirm the loss of the log device, and then it will import.

Here's what I have heard: The ability to import with a failed log device as described above was created right around zpool 14 or 15, not quite sure which.

Here's what I don't know: If the failed zpool was some version which was too low ... and you try to import on an OS which is capable of a much higher version of zpool ... Can the newer OS handle it just because the newer OS is able to handle a newer version of zpool? Or maybe the version of the failed pool is the one that matters, regardless of what the new OS is capable of doing now?
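As a sketch of what the log-device handling looks like once a pool at version 19 or later is imported (the path here is just the log file from the original config):

  fileserver ~ # zpool remove tank /ZIL-Log.img

Removing a log vdev with 'zpool remove' is the pool v19 feature being referred to. Some later releases also accept 'zpool import -m' to force an import despite a missing log device, but whether that flag exists on this particular Nexenta build is not certain.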
erm..... using a file on another pool (your zpool) for the SLOG is NOT a good idea. Use an SSD or two (a mirror) or let ZFS deal with the ZIL on the drives.

roy

--
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
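A sketch of the mirrored-SSD slog Roy is suggesting, with made-up device names for the two SSDs, once the pool can be imported again:

  fileserver ~ # zpool add tank log mirror c14t0d0 c14t1d0

Alternatively, simply not attaching a separate log device leaves the ZIL on the pool's own disks, which is the other option mentioned above.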
According to 'zpool upgrade' my pool versions are 22. All pools were upgraded several months ago, including the one in question. Here is what I get when I try to import:

fileserver ~ # zpool import 9013303135438223804
cannot import 'tank': pool may be in use from other system, it was last accessed by fileserver (hostid: 0x406155) on Tue Jul  6 10:46:13 2010
use '-f' to import anyway

fileserver ~ # zpool import -f 9013303135438223804
cannot import 'tank': one or more devices is currently unavailable
        Destroy and re-create the pool from
        a backup source.

On Jul 6, 2010, at 11:48 PM, Edward Ned Harvey wrote:

>> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
>> bounces at opensolaris.org] On Behalf Of Andrew Kener
>>
>> the OS hard drive crashed [and log device]
>
> Here's what I know: In zpool >= 19, if you import this, it will prompt you
> to confirm the loss of the log device, and then it will import.
>
> Here's what I have heard: The ability to import with a failed log device as
> described above was created right around zpool 14 or 15, not quite sure
> which.
>
> Here's what I don't know: If the failed zpool was some version which was
> too low ... and you try to import on an OS which is capable of a much higher
> version of zpool ... Can the newer OS handle it just because the newer OS is
> able to handle a newer version of zpool? Or maybe the version of the failed
> pool is the one that matters, regardless of what the new OS is capable of
> doing now?
>
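One way to see which devices the pool's labels actually expect, and whether the missing log entry is what is blocking the import, is to dump a label from one of the data disks with zdb; the s0 slice name below is a guess at the usual whole-disk slice:

  fileserver ~ # zdb -l /dev/dsk/c12t0d0s0

The label's vdev tree lists every child the pool was created with, including the log file, and the hostid and txg fields show which system last wrote it.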
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Andrew Kener
>
> According to 'zpool upgrade' my pool versions are 22. All pools
> were upgraded several months ago, including the one in question. Here
> is what I get when I try to import:
>
> fileserver ~ # zpool import 9013303135438223804
> cannot import 'tank': pool may be in use from other system, it was last
> accessed by fileserver (hostid: 0x406155) on Tue Jul 6 10:46:13 2010
> use '-f' to import anyway
>
> fileserver ~ # zpool import -f 9013303135438223804
> cannot import 'tank': one or more devices is currently unavailable
>         Destroy and re-create the pool from
>         a backup source.

That's a major bummer. And I don't think it's caused by the log device, because as you say, zpool 22 > 19, which means your system supports log device removal.

I think ... zpool status will show you which devices are "currently unavailable", right? I know "zpool status" will show the status of vdevs in a healthy pool. I just don't know if the same is true for faulted pools.
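For what it's worth, 'zpool status' only reports pools that are already imported, so for a pool that refuses to import the closest equivalent is the configuration printed by the import-time scan itself:

  fileserver ~ # zpool status -v tank    # works only once the pool is imported
  fileserver ~ # zpool import            # lists the would-be config, with any UNAVAIL members, for pools not yet imported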