Konstantin Kuklin
2013-Feb-17 14:46 UTC
[zfs-discuss] zfs raid1 error resilvering and mount
Hi, I have a ZFS raid1 (mirror) pool with 2 devices. The first device died
and booting from the second doesn't work...

I grabbed the http://mfsbsd.vx.sk/ flash image, booted from it and tried a
zpool import: http://puu.sh/2402E

When I load zfs.ko and opensolaris.ko I see this message:
Solaris: WARNING: Can't open objset for zroot/var/crash
Solaris: WARNING: Can't open objset for zroot/var/crash

zpool status:
http://puu.sh/2405f

Resilvering freezes with:

zpool status -v
.............
zroot/usr:<0x28ff>
zroot/usr:<0x29ff>
zroot/usr:<0x2aff>
zroot/var/crash:<0x0>
root at Flash:/root #

How can I delete or drop the filesystem zroot/var/crash (1-10 MB in size, I
don't remember exactly) and mount the other ZFS mount points with my data?

--
Best regards,
Konstantin Kuklin
Hmmm, zfs destroy -f zroot/var/crash ?

Then you can try a zfs mount -a

Removing pjd and mm from cc; if they want to read your message they're old
enough to check their ML subscription.

On Feb 17, 2013, at 3:46 PM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:

> Hi, I have a ZFS raid1 (mirror) pool with 2 devices. The first device died
> and booting from the second doesn't work...
> [...]
> How can I delete or drop the filesystem zroot/var/crash (1-10 MB in size, I
> don't remember exactly) and mount the other ZFS mount points with my data?
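A minimal sketch of that sequence, assuming the pool can still be imported at
all from the rescue environment (pool name zroot as in the status output):

zpool import -f -N zroot        # import without mounting any datasets
zfs destroy -f zroot/var/crash
zfs mount -a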
On 2013-02-17 15:46, Konstantin Kuklin wrote:
> Hi, I have a ZFS raid1 (mirror) pool with 2 devices. The first device died
> and booting from the second doesn't work...

You didn't say which OS version created the pool (ultimately - which pool
version is there), and I'm not sure about support of the ZFS versions in that
flash image you linked to. Possibly, an OI LiveCD might do a better job for
you - but maybe your disks got too corrupted in some cataclysm :(

However, generally, recent implementations should have several useful
"zpool import" flags:

* forcing an import with a rollback to an older pool state (-F) - which may
  or may not be more intact (up to 32 or 128 transactions back);
* import without automounting (-N);
* read-only import (-o ro), which should panic in far fewer cases and lets
  you evacuate readable data with at least cp/rsync;
* import without a cachefile and/or with a relocated pool root mountpoint
  (-R /a) so as to, in particular, not damage the namespace of your running
  system with this pool (not really relevant in the case of livecds).

Hopefully, you can either import without mounts and issue a "zfs destroy" of
your offending dataset, or roll back (irreversibly) to a working state.

However, it is also possible that the corruption is among the metadata. If
you're lucky and just the latest transaction got broken during the crash
(i.e. the disk firmware ignored queuing and caching hints and wrote something
out of order), then a rollback by one or a few TXGs may point you to an older
root of the metadata tree which is not yet overwritten by newer transactions
(note: this is not guaranteed by the OS, just probable) and does contain
consistent metadata in at least one copy of each of the metadata blocks.

Breakage in /var/crash remotely suggests that your system tried to either
create a dump (kernel panic) or, more likely, process one (via savecore in
the case of Solaris), and failed mid-write during this procedure.

> [...]
> How can I delete or drop the filesystem zroot/var/crash (1-10 MB in size, I
> don't remember exactly) and mount the other ZFS mount points with my data?

Good luck,
//Jim Klimov
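Combining those flags, an import attempt from the rescue environment could
look like this (just a sketch - whether -F actually finds an older intact
state depends on how much has already been overwritten):

zpool import -f -N -R /a -F zroot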
Also, adding to my recent post: instead of resilvering, try to run
"zpool scrub" first - it should verify all checksums and repair whatever it
can via redundancy (for metadata - the extra copies). Resilver is similar to
scrub, but it has its own goals and implementation and might not be so
forgiving about pool errors.

//Jim
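For example, with the pool name from your status output:

zpool scrub zroot
zpool status -v zroot     # re-run this to watch scrub progress and the error list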
Konstantin Kuklin
2013-Feb-18 07:48 UTC
[zfs-discuss] zfs raid1 error resilvering and mount
I can't do it, because resilvering is in progress (frozen at 0.1%) and
"zfs list" is empty.

2013/2/17 Fleuriot Damien <ml at my.gd>:
> Hmmm, zfs destroy -f zroot/var/crash ?
>
> Then you can try a zfs mount -a
> [...]

--
Best regards,
Konstantin Kuklin
Reassure me here, you've replaced your failed vdev before trying to resilver,
right?

Your zpool status suggests otherwise, so I only want to make sure this is a
status from before replacing your drive.

On Feb 18, 2013, at 8:48 AM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:

> I can't do it, because resilvering is in progress (frozen at 0.1%) and
> "zfs list" is empty.
> [...]
Konstantin Kuklin
2013-Feb-19 11:39 UTC
[zfs-discuss] zfs raid1 error resilvering and mount
I didn't replace the disk. After a reboot the system would not start (ZFS is
installed as the default root filesystem), so I booted another system (from
flash); resilvering started automatically and shows me the warnings, with
progress frozen (it dies while checking zroot/var/crash).
Would replacing the dead disk heal var/crash with the <0x0> address?

2013/2/18 Fleuriot Damien <ml at my.gd>:
> Reassure me here, you've replaced your failed vdev before trying to
> resilver, right?
>
> Your zpool status suggests otherwise, so I only want to make sure this is a
> status from before replacing your drive.
> [...]

--
Best regards,
Konstantin Kuklin
If I understand you correctly, you have:
- booted another system from flash
- NOT replaced the failed device
- under this booted system, resilvering takes place automatically

While I cannot tell why ZFS tries to resilver without a new, proper device, I
think it will only work once you've replaced the failed device.

Could you try replacing the failed drive?

On Feb 19, 2013, at 12:39 PM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:

> I didn't replace the disk. After a reboot the system would not start (ZFS is
> installed as the default root filesystem), so I booted another system (from
> flash); resilvering started automatically and shows me the warnings, with
> progress frozen (it dies while checking zroot/var/crash).
> [...]
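That would look roughly like the following. The device names below are
placeholders only - substitute the ones shown in your own zpool status, and
note that the new disk needs to be partitioned (and given boot code) like the
old one first:

zpool replace zroot da0p3 da1p3
zpool status -v zroot     # a resilver onto the new device should then start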
On 2013-02-19 12:39, Konstantin Kuklin wrote:
> I didn't replace the disk. After a reboot the system would not start (ZFS is
> installed as the default root filesystem), so I booted another system (from
> flash); resilvering started automatically and shows me the warnings, with
> progress frozen (it dies while checking zroot/var/crash).

Well, in this case try again with the "zpool import" options I've described
earlier, and a "zpool scrub" to try to inspect and repair the pool state you
have now. You might want to disconnect the "broken" disk for now, since
resilvering would try to overwrite it anyway (the whole disk, or just the
differences if it is found to have a valid label ending at an earlier TXG
number).

> Would replacing the dead disk heal var/crash with the <0x0> address?

Probably not, since your pool's only copy has an error in it. 0x0 is a
metadata block (the dataset root or close to that), so an error in it is
usually fatal (it is for most dataset types).

Possibly, an import with rollback can return your pool to a state where
another blockpointer-tree version points to a different (older) block as this
dataset's 0x0, and that one would be valid. But if you've already imported
the pool and it ran for a while, chances are that your older, possibly
better-intact TXGs are no longer referencable (rolled out of the ring buffer
forever).

Good luck,
//Jim
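If the zpool binary on your rescue image supports the -n dry-run modifier to
-F (an assumption - check zpool(8) there), you can first ask whether such a
rewind would help without actually performing it:

zpool export zroot          # only if the pool is currently imported
zpool import -f -N -F -n zroot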
Konstantin Kuklin
2013-Feb-19 13:24 UTC
[zfs-discuss] zfs raid1 error resilvering and mount
zfs set canmount=off zroot/var/crash

I can't do this, because "zfs list" is empty.

2013/2/19 Fleuriot Damien <ml at my.gd>:
> The thing is, perhaps you have corrupted blocks that weren't caught either
> by ZFS or your drives' firmware, preventing the pool's operation.
>
> Seeing zroot/var/crash is the problem, could you try:
>
> 1/ booting from a live CD or flash
> 2/ NOT starting a resilver
> 3/ running the command:
>    zfs set canmount=off zroot/var/crash
>
> This should prevent /var/crash from trying to be mounted from the ZFS pool.
>
> Perhaps this'll allow you to get further through the boot process and
> perhaps even start your ZFS pool correctly.
>
> On Feb 19, 2013, at 12:52 PM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:
>
>> You understand me right, but my problem is not in the dead device... raid1
>> must work correctly with 1 device, and the replace command (or anything
>> else) does not work, it just freezes.
>> I have only the 2 warnings about the crashed fs zroot/var/crash and that's
>> all. Any idea how I can repair it without the default ZFS tools like zfs
>> and zpool?
>> [...]

--
Best regards,
Konstantin Kuklin
Well I can't see anything else to help you, except trying to replace your
failed vdev and resilvering from there?

On Feb 19, 2013, at 2:24 PM, Konstantin Kuklin <konstantin.kuklin at gmail.com> wrote:

> zfs set canmount=off zroot/var/crash
>
> I can't do this, because "zfs list" is empty.
> [...]
On 2013-02-19 14:24, Konstantin Kuklin wrote:
> zfs set canmount=off zroot/var/crash
>
> I can't do this, because "zfs list" is empty.

I'd argue that in your case it might be desirable to evacuate the data and
reinstall the OS - just to be certain that the ZFS on-disk structures of the
new installation have no defects.

To evacuate the data, a read-only import would suffice:

# zpool import -f -N -R /a -o ro zroot

This should import the pool without mounting its datasets (-N). Using
"zfs mount zroot/ROOT/myrootfsname" and so on, you can mount just the
datasets which hold your valuable data individually (under '/a' in this
example) and rsync them to some other storage.

After you've saved your data, you can try to "repair" the pool by rolling it
back:

# zpool export zroot
# zpool import -F -f -N -R /a zroot

This should try to roll back 10 transaction sets or so, possibly giving you
an intact state of the ZFS data structures and a usable pool. Maybe not.

//Jim
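For instance, if the data you care about lives in a dataset such as
zroot/usr/home (the dataset name and the rsync destination below are
placeholders only):

# zfs mount zroot/usr/home
# rsync -aH /a/usr/home/ backuphost:/rescue/home/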
Victor Latushkin
2013-Feb-19 16:02 UTC
[zfs-discuss] zfs raid1 error resilvering and mount
On 2/19/13 6:32 AM, Jim Klimov wrote:
> On 2013-02-19 14:24, Konstantin Kuklin wrote:
>> zfs set canmount=off zroot/var/crash
>>
>> I can't do this, because "zfs list" is empty.
>
> I'd argue that in your case it might be desirable to evacuate the data and
> reinstall the OS - just to be certain that the ZFS on-disk structures of
> the new installation have no defects.
>
> To evacuate the data, a read-only import would suffice:

This is a good idea but ..

> # zpool import -f -N -R /a -o ro zroot

This command will not achieve a readonly import.

For a readonly import one needs to use 'zpool import -o readonly=on
<poolname>', as 'zpool import -o ro <poolname>' will import in R/W mode and
just mount the filesystems readonly.

Feel free to add other options (-f, -N, etc.) as needed.

> [...]
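So the earlier example, adjusted for a genuinely read-only import, would be:

# zpool import -f -N -R /a -o readonly=on zroot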
On 2013-02-19 17:02, Victor Latushkin wrote:
> On 2/19/13 6:32 AM, Jim Klimov wrote:
>> To evacuate the data, a read-only import would suffice:
>>
>> # zpool import -f -N -R /a -o ro zroot
>
> This command will not achieve a readonly import.
>
> For a readonly import one needs to use 'zpool import -o readonly=on
> <poolname>', as 'zpool import -o ro <poolname>' will import in R/W mode and
> just mount the filesystems readonly.

Oops, my bad. Do what the guru says! Really, I was mistaken in this
fast-typing ;)

> Feel free to add other options (-f, -N, etc.) as needed.

//Jim Klimov