Chookiex
2007-Nov-13 17:32 UTC
[zfs-discuss] zpool status can not detect the vdev removed?
I made a file-backed zpool like this:

bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        filepool            ONLINE       0     0     0
          /export/f1.dat    ONLINE       0     0     0
          /export/f2.dat    ONLINE       0     0     0
          /export/f3.dat    ONLINE       0     0     0
        spares
          /export/f4.dat    AVAIL

errors: No known data errors

After this, I ran "rm /export/f1.dat" and then wrote some data to the pool. The write succeeded, and "zpool status" reported no problem, even though f1.dat really had been removed!

And when I scrubbed the pool, Solaris rebooted...

How should I interpret this? Is the system supposed to reboot when a disk disappears from the pool?
hex.cookie
2007-Nov-13 17:39 UTC
[zfs-discuss] zpool status can not detect the vdev removed?
I made a file-backed zpool like this:

bash-3.00# zpool status
  pool: filepool
 state: ONLINE
 scrub: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        filepool            ONLINE       0     0     0
          /export/f1.dat    ONLINE       0     0     0
          /export/f2.dat    ONLINE       0     0     0
          /export/f3.dat    ONLINE       0     0     0
        spares
          /export/f4.dat    AVAIL

errors: No known data errors

After this, I ran "rm /export/f1.dat" and then wrote some data to the pool. The write succeeded, and "zpool status" reported no problem, even though f1.dat really had been removed!

And when I scrubbed the pool, Solaris rebooted...

How should I interpret this? Is the system supposed to reboot when a disk disappears from the pool?

This message posted from opensolaris.org
Eric Schrock
2007-Nov-13 17:58 UTC
[zfs-discuss] zpool status can not detect the vdev removed?
As with any application, if you hold the vnode (or file descriptor) open and remove the underlying file, you can still write to the file even after it is removed. Removing the file only removes it from the namespace; until the last reference is closed, the file continues to exist. You can use 'zpool online' to trigger a reopen of the device.

If you're running a recent build of Nevada, you are better off using lofi devices to simulate device removal, as that is much closer to the real thing.

- Eric

On Tue, Nov 13, 2007 at 09:39:27AM -0800, hex.cookie wrote:
> I make a file zpool like this:
> bash-3.00# zpool status
>   pool: filepool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME                STATE     READ WRITE CKSUM
>         filepool            ONLINE       0     0     0
>           /export/f1.dat    ONLINE       0     0     0
>           /export/f2.dat    ONLINE       0     0     0
>           /export/f3.dat    ONLINE       0     0     0
>         spares
>           /export/f4.dat    AVAIL
>
> errors: No known data errors
>
> after this, I run "rm /export/f1.dat", and I write something, the write
> operation is normal, but when I check the status of zpool, it hadn't told
> me any exception, but the file f1.dat is really removed!
>
> and when I scrub the pool, Solaris reboot...
> what should I consider this? If the system would reboot when I get off a
> disk from the pool?

--
Eric Schrock, FishWorks                    http://blogs.sun.com/eschrock
hex.cookie
2007-Nov-14 05:21 UTC
[zfs-discuss] zpool status can not detect the vdev removed?
And after the system rebooted, I ran "zpool status"; it reported that one vdev was corrupted, so I recreated the file I had removed. After all those operations, I ran "zpool destroy" on the pool, and the system rebooted again......

Is Solaris supposed to do that?