I have several filesystems in a raid-z pool. All seem to be working correctly except one, which is mounted but yields the following with ls -lh:

?--------- ? ? ? ? ? media

I just finished scrubbing the pool and there are no errors. I can't seem to do anything with the filesystem (cd, chown, etc.).

-brian
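For readers diagnosing the same symptom, a minimal sketch of commands to confirm the dataset's state; the pool name "tank" and dataset name "media" are placeholders, not taken from the report:

    # Verify the pool reports no errors after the scrub
    zpool status -v tank
    # Confirm ZFS believes the dataset is mounted
    zfs get mountpoint,mounted tank/media
    # Inspect the mountpoint directory itself; a failing stat() on it
    # is what produces the ?--------- ? ? ? ? ? listing
    ls -lhd /tank/media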
On Mon, 2010-08-02 at 08:48 -0700, Brian wrote:
> I have several filesystems in a raid-z pool. All seem to be working correctly except one, which is mounted but yields the following with ls -lh:
>
> ?--------- ? ? ? ? ? media
>
> I just finished scrubbing the pool and there are no errors. I can't seem to do anything with the filesystem (cd, chown, etc.)
>
> -brian

Brian,

File listings with question marks in them have happened to me on snv_134 when I had multiple systems mounting the same LUN via COMSTAR iSCSI targets. The file system was actually formatted as ext2. ext2 is not meant to be mounted over iSCSI on multiple systems in read-write mode; because ext2 is not a clustered file system, data can easily be corrupted. Luckily, unmounting and then remounting the file system on the affected hosts solved the problem and the LUN became readable.

--
Thank you,
Preston Connors
Atlantic.Net
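A sketch of the unmount/remount workaround Preston describes, as it might look on a Linux initiator with an ext2 LUN; the device and mountpoint names are assumptions, and the thread does not say which OS the affected hosts ran:

    # Unmount the ext2 filesystem on each affected host...
    umount /mnt/lun0
    # ...then remount it; after this the LUN became readable again
    mount -t ext2 /dev/sdb1 /mnt/lun0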
Thanks Preston. I am actually using ZFS locally, connected directly to 3 SATA drives in a raid-z pool. The filesystem is ZFS, it mounts without complaint, and the pool is clean. I am at a loss as to what is happening.

-brian
Brian,

You might try using zpool history -il to see what ZFS operations, if any, might have led up to this problem.

If zpool history doesn't provide any clues, then what other operations might have occurred prior to this state?

It looks like something trampled this file system...

Thanks,

Cindy

On 08/02/10 10:26, Brian wrote:
> Thanks Preston. I am actually using ZFS locally, connected directly to 3 SATA drives in a raid-z pool. The filesystem is ZFS, it mounts without complaint, and the pool is clean. I am at a loss as to what is happening.
>
> -brian
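For reference, a minimal invocation of the command Cindy suggests, again assuming a pool named tank:

    # -i includes internally logged ZFS events; -l adds the user,
    # host, and zone recorded for each entry
    zpool history -il tank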
Cindy,

Thanks for the quick response. Consulting the ZFS history, I note the following actions:

- "imported" my three-disk raid-z pool, originally created on the most recent version of OpenSolaris but now running NexentaStor 3.03
- "upgraded" my pool
- "destroyed" two file systems I was no longer using (neither of these was, of course, the file system at issue)
- "destroyed" a snapshot on another filesystem
- played around with permissions (these were my only actions directly on the file system)

None of these actions seemed to have a negative impact on the filesystem, and it was working well when I gracefully shut down (to physically move the computer).

I am a bit at a loss. With copy-on-write and a clean pool, how can I have corruption?

-brian

On Mon, Aug 2, 2010 at 12:52 PM, Cindy Swearingen <cindy.swearingen at oracle.com> wrote:
> Brian,
>
> You might try using zpool history -il to see what ZFS operations, if any, might have led up to this problem.
>
> If zpool history doesn't provide any clues, then what other operations might have occurred prior to this state?
>
> It looks like something trampled this file system...
>
> Thanks,
>
> Cindy
> [...]

--
Brian Merrell, Director of Technology
Backstop LLP
1455 Pennsylvania Ave., N.W.
Suite 400
Washington, D.C. 20004
202-628-BACK (2225)
merrellb at backstopllp.com
www.backstopllp.com
Hi Brian,

I don't think data corruption occurs cleanly within a file system boundary. What kind of permission changes? This looks like the mode/permissions of this file system are messed up, but something much worse happened.

Thanks,

Cindy

On 08/02/10 11:07, Brian Merrell wrote:
> Cindy,
>
> Thanks for the quick response. Consulting the ZFS history, I note the following actions:
>
> - "imported" my three-disk raid-z pool, originally created on the most recent version of OpenSolaris but now running NexentaStor 3.03
> - "upgraded" my pool
> - "destroyed" two file systems I was no longer using (neither of these was, of course, the file system at issue)
> - "destroyed" a snapshot on another filesystem
> - played around with permissions (these were my only actions directly on the file system)
>
> None of these actions seemed to have a negative impact on the filesystem, and it was working well when I gracefully shut down (to physically move the computer).
>
> I am a bit at a loss. With copy-on-write and a clean pool, how can I have corruption?
>
> -brian
> [...]
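One way to inspect the mode and ACL state Cindy is asking about; a sketch only, again assuming pool tank and dataset media, and that the aclmode property is present on this build:

    # Show the mountpoint's NFSv4 ACL entries (Solaris ls)
    ls -Vd /tank/media
    # Check how the dataset handles ACLs and chmod interactions
    zfs get aclmode,aclinherit tank/media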
Hi Brian,

is it still relevant?

On 02.08.10 21:07, Brian Merrell wrote:
> Cindy,
>
> Thanks for the quick response. Consulting the ZFS history, I note the following actions:
>
> - "imported" my three-disk raid-z pool, originally created on the most recent version of OpenSolaris but now running NexentaStor 3.03

Then we need to know what changes are there in NexentaStor 3.03 on top of build 134. Nexenta folks are reading this list, so I hope they'll chime in.

regards
victor

> [...]

--
Victor Latushkin        phone: x11467 / +74959370467
TSC-Kernel EMEA         mobile: +78957693012
Sun Services, Moscow    blog: http://blogs.sun.com/vlatushkin
Sun Microsystems
Thanks for the response, Victor. It is certainly still relevant in the sense that I am hoping to recover the data (although I've been informed the odds are strongly against me). My understanding is that Nexenta has been backporting ZFS code changes post-134. I suppose it could be an error they somehow introduced, or perhaps I've found a unique codepath that is also relevant pre-134. Earlier today I was able to send some zdb dump information to Cindy, which hopefully will shed some light on the situation (I would be happy to send it to you as well).

-brian

On Tue, Aug 17, 2010 at 10:37 AM, Victor Latushkin <Victor.Latushkin at sun.com> wrote:
> Hi Brian,
>
> is it still relevant?
> [...]

--
Brian Merrell, Director of Technology
Backstop LLP
1455 Pennsylvania Ave., N.W.
Suite 400
Washington, D.C. 20004
202-628-BACK (2225)
merrellb at backstopllp.com
www.backstopllp.com
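For context, a sketch of how such zdb output might be gathered; the pool/dataset names and verbosity level are assumptions, since the thread does not say exactly what was sent:

    # Dump object-level details for the damaged dataset
    zdb -dddd tank/media > /tmp/zdb-media.txt
    # Show the pool configuration as zdb sees it
    zdb -C tank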