Hello,

I have two USB drives connected to my PC with a zpool on each, one
called TANK, the other IOMEGA. After some problems this morning I
managed to get the IOMEGA pool working, but I have had less luck with
the TANK pool. When I run "zpool import" I would expect to see some
state for "TANK", but instead I get:
" pool: IOMEGA
id: 9922963935057378355
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported
using
the ''-f'' flag.
see: http://www.sun.com/msg/ZFS-8000-72
config:
IOMEGA FAULTED corrupted data
c4t0d0 ONLINE"
---------------
When I run "zpool status" I get this:
" pool: IOMEGA
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
IOMEGA ONLINE 0 0 0
c8t0d0 ONLINE 0 0 0
errors: No known data errors
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c6t0d0s0 ONLINE 0 0 0
c6t2d0s0 ONLINE 0 0 0"
In other words, the actual IOMEGA pool is on the drive c8t0d0 and is
marked as OK, but the USB drive at c4t0d0 also appears to contain a
zpool called IOMEGA, even though it really holds the TANK pool!

What really worries me is that ZFS has, for some reason, started to
treat a drive that belonged to one pool as if it belonged to another.
Could this happen with non-USB drives in other configurations, such as
mirrors or raidz?
I suppose anything can happen on Friday the 13th...
Cheers,
Stefan Olsson
comment below...

Stefan Olsson wrote:
> IMPORTANT: This message is private and confidential. If you have received
> this message in error, please notify us and remove it from your system.

please notify your lawyers that this message is now on the internet and
publicly archived forever :-)

> [...]
> In other words, the actual IOMEGA pool is on the drive c8t0d0 and is
> marked as OK, but the USB drive at c4t0d0 also appears to contain a
> zpool called IOMEGA, even though it really holds the TANK pool!

ZFS maintains a cache of which pools were imported so that, at boot time,
it will automatically try to re-import them. The file is
/etc/zfs/zpool.cache, and you can view its contents with "zdb -C".

If the current state of affairs does not match the cache, you can export
the pool, which will clear its entry in the cache. Then retry the import.
 -- richard
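[A minimal sketch of the sequence Richard describes; the pool name is
taken from Stefan's output, and whether the re-scan then shows TANK
again depends on what is actually on the disk labels:]

  # Show the pool configurations currently recorded in /etc/zfs/zpool.cache
  zdb -C

  # Export the pool whose cached entry no longer matches reality;
  # exporting also removes its entry from the cache file
  zpool export IOMEGA

  # Re-scan the attached devices, list the pools that can be imported,
  # then retry the import
  zpool import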
Richard Elling wrote:
[...]
> ZFS maintains a cache of which pools were imported so that, at boot time,
> it will automatically try to re-import them. The file is
> /etc/zfs/zpool.cache, and you can view its contents with "zdb -C".
>
> If the current state of affairs does not match the cache, you can export
> the pool, which will clear its entry in the cache. Then retry the import.
>  -- richard

I had this problem myself with a mirrored zpool in an ICY BOX IB-3218
(two HDDs which appear as different LUNs) set up for backup purposes.
For zpools which are intended to be disconnected (or powered off)
regularly, an 'autoexport' flag would be nice: if set, the system
exports the pool at shutdown. This would prevent problems like Stefan's
on a reboot, and also when a zpool from a shut-down system is connected
to another system (like: "Hm, the old slow laptop is powered off, but
hey, everything I need is also on this shiny 1.5TB USB-HDD zpool with
all my other important stuff/backups.. *plug into workstation* OMG! My
backup pool is faulty!!").

Regards, Florian Ermisch
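[There is no such 'autoexport' property today; as a rough illustration
of the idea only, a legacy shutdown script along these lines could
export a removable pool before the system goes down. The script path
and the pool name "backup" are assumptions, not a tested recipe:]

  #!/sbin/sh
  # /etc/rc0.d/K01exportbackup (hypothetical): export the removable
  # backup pool at shutdown so its entry does not linger in
  # /etc/zfs/zpool.cache and the pool can be attached to another host.
  POOL=backup   # assumed pool name

  if zpool list "$POOL" > /dev/null 2>&1; then
      zpool export "$POOL"
  fi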
Florian Ermisch wrote:
> [...]
> For zpools which are intended to be disconnected (or powered off)
> regularly, an 'autoexport' flag would be nice: if set, the system
> exports the pool at shutdown. This would prevent problems like Stefan's
> on a reboot, and also when a zpool from a shut-down system is connected
> to another system.

There is a zpool property, "cachefile", which will effectively do this.
Yes, I think this is a good idea for removable media.
 -- richard
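[For reference, a short sketch of using that property; the pool name
"backup" and the device name are illustrative. With cachefile=none the
pool is simply never recorded in /etc/zfs/zpool.cache, so it is not
auto-imported at boot and has to be imported by hand after the drive is
attached:]

  # Keep an existing removable pool out of the boot-time cache
  zpool set cachefile=none IOMEGA
  zpool get cachefile IOMEGA

  # Or create a new removable pool that way from the start
  zpool create -o cachefile=none backup c4t0d0

  # After reattaching the drive later, import it explicitly
  zpool import backup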
On Sat, Mar 14, 2009 at 8:25 PM, Richard Elling
<richard.elling at gmail.com> wrote:
> [...]
> There is a zpool property, "cachefile", which will effectively do this.
> Yes, I think this is a good idea for removable media.
>  -- richard

I've actually experienced this bug on non-removable media. In
particular, if I have a pool exported and I reboot, then on import it
will see two possible pools: one using the slice/partition boundaries
on the drive, and one using the "whole disk" (which is what I used when
I made the pool). They have different unique IDs, and obviously
attempting to import one of them results in the report that the
metadata is corrupt and cannot be recovered. But how is it that ZFS
fails to detect that one of these pools is impossible? (Yes, I realize
that removable media isn't really a special case: as far as ZFS is
concerned, it's all disks.)

- Rich

--
Linux is obsolete
 -- Andrew Tanenbaum
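[When "zpool import" lists two candidates like that, one way to pick a
specific one is to import by the numeric id shown in the listing rather
than by name; a sketch with a placeholder id:]

  # Each candidate in the listing carries a unique numeric id, even when
  # the pool names collide
  zpool import

  # Import the candidate you believe is the real pool by its id; the id
  # below is illustrative only, and the optional trailing name renames
  # the pool on import
  zpool import 1234567890123456789 tank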