Displaying 14 results from an estimated 14 matches for "zfs_recov".
2010 Sep 14
3
How to set zfs:zfs_recover=1 and aok=1 in GRUB at startup?
I can't edit my /etc/system file now because the system is not booting.
Is there a way to force these parameters on the Solaris kernel at boot time with GRUB?
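One workaround (a sketch, not the only route; the kernel path below matches a typical x86 menu.lst entry, so adjust for your install) is to boot to milestone=none from GRUB, which brings the system up far enough for a root shell without importing pools, and then add the lines to /etc/system:

```
# At the GRUB menu, press 'e' and append "-m milestone=none" to the kernel$ line:
#   kernel$ /platform/i86pc/kernel/$ISADIR/unix -m milestone=none
# After logging in as root on the console, append the tunables and reboot:
echo "set zfs:zfs_recover=1" >> /etc/system
echo "set aok=1" >> /etc/system
reboot
```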
--
This message posted from opensolaris.org
2009 Oct 31
1
Kernel panic on zfs import
...spa?threadID=49020. Zdb
seems to be ... doing something. Not sure _what_ it's doing, but it
can't be making things worse for me right?
I'm going to try adding the following to /etc/system, as mentioned
here: http://opensolaris.org/jive/thread.jspa?threadID=114906
set zfs:zfs_recover=1
set aok=1
Suggestions?
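Since /etc/system is only read at boot, the same tunables can also be poked into the already-running kernel with mdb as root (a sketch using the same variable names; /W writes a 32-bit value into the live kernel, so use with care):

```
echo "aok/W 1" | mdb -kw
echo "zfs_recover/W 1" | mdb -kw
```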
2009 Nov 02
0
Kernel panic on zfs import (hardware failure)
...>> can't be making things worse for me right?
>
> Yes, zdb only reads, so it cannot make things worse.
>
>> I'm going to try adding the following to /etc/system, as mentioned
>> here: http://opensolaris.org/jive/thread.jspa?threadID=114906
>> set zfs:zfs_recover=1
>> set aok=1
>
> Please do not rush with these settings. Let's look at the stack backtrace
> first.
>
> Regards,
> Victor
>
I think I've found the cause of my problem. I disconnected one side of
each mirror, rebooted, and imported. The system didn't...
2010 Aug 18
1
Kernel panic on import / interrupted zfs destroy
.../ reboot loop.
I was able to get in with milestone=none and delete the zfs cache, but now I have a new problem: Any attempt to import the pool results in a panic. I have tried from my snv_134 install, from the live cd, and from nexenta. I have tried various zdb incantations (with aok=1 and zfs:zfs_recover=1), to no avail - these error out after a few minutes. I have even tried another controller.
I have zdb -e -bcsvL running now from 134 (without aok=1) which has been running for several hours. Can zdb recover from this kind of situation (with a half-destroyed filesystem that panics the kernel...
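For reference, a sketch decoding the zdb flags mentioned above ("tank" is a hypothetical pool name; flag meanings are per the illumos zdb, so check the man page on your build):

```shell
# -e : operate on an exported pool not listed in zpool.cache
# -b : traverse all blocks and report space accounting / leaked blocks
# -c : verify checksums of metadata blocks while traversing
# -s : report statistics on zdb's own I/O
# -v : verbose output
# -L : disable leak tracing and the loading of space maps
zdb_cmd="zdb -e -bcsvL tank"
echo "$zdb_cmd"
```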
2010 Sep 24
3
Kernel panic on ZFS import - how do I recover?
I posted this on the www.nexentastor.org forums, but no answer so far, so I apologize if you are seeing this twice. I am also engaged with nexenta support, but was hoping to get some additional insights here.
I am running nexenta 3.0.3 community edition, based on 134. The box crashed yesterday, and goes into a reboot loop (kernel panic) when trying to import my data pool, screenshot attached.
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two SATA disks in the mirror. Solaris 10 5.10
Generic_141415-08 i86pc i386 i86pc
Unfortunately the first disk with grub loader has failed with unrecoverable
block write/read errors.
Now I have the problem to import rpool after the first disk has failed.
So I decided to do "zpool import -f rpool" with only the second disk, but it
hangs and the system is
2012 Dec 12
20
Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)
I've hit this bug on four of my Solaris 11 servers. Looking for anyone else
who has seen it, as well as comments/speculation on cause.
This bug is pretty bad. If you are lucky you can import the pool read-only
and migrate it elsewhere.
I've also tried setting zfs:zfs_recover=1,aok=1 with varying results.
http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc
Hardware platform:
Supermicro X8DAH
144GB ram
Supermicro sas2 jbods
LSI 9200-8e controllers (Phase 13 fw)
ZeusRAM log
ZeusIOPS SAS L2ARC
Seagate ST33000650SS sas drives
All four servers...
2010 Sep 18
6
space_map again nuked!!
I'm really angry at ZFS:
My server no longer boots because the ZFS space map is corrupt again.
I just replaced the whole space map by recreating a new zpool from scratch and copying the data back with "zfs send & zfs receive".
Did it copy the corrupt space map?!
For me it's over now. I've lost too much time and money with this experimental filesystem.
My version is Zpool
2010 Jun 02
11
ZFS recovery tools
Hi,
I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to
learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks
to some great forum posts from Victor Latushkin, however without his posts I would still be crying
at night...
I think the worst example is the zdb man page, which does little more than ask you
2008 Jun 05
6
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc at c4solutions.net>
NOTE: This is a last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me because I didn't try to replace the log on a running
system. My
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
...ck mounts, readonly and "-n"), but so far
all attempts led to the computer hanging within a minute
("vmstat 1" shows that free RAM plummets towards the zero
mark).
I've tried preparing the system tunables as well:
:; echo "aok/W 1" | mdb -kw
:; echo "zfs_recover/W 1" | mdb -kw
and sometimes adding:
:; echo zfs_vdev_max_pending/W0t5 | mdb -kw
:; echo zfs_resilver_delay/W0t0 | mdb -kw
:; echo zfs_resilver_min_time_ms/W0t20000 | mdb -kw
:; echo zfs_txg_synctime/W0t1 | mdb -kw
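To confirm the writes took effect, the same variables can be read back (a sketch in the same prompt style; /D prints the current value in decimal, and -k without -w opens the kernel read-only):

```
:; echo "zfs_recover/D" | mdb -k
:; echo "aok/D" | mdb -k
```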
In this case I am not very hesitant to recreate the rpool
and reinstall th...
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
...leted and said it had fixed 20.5MB of data, I restarted the SAN. This is when all heck broke loose. I found myself in an endless kernel panic/reboot loop.
I booted with a LiveCD and deleted /etc/zfs/zpool.cache, which allowed me to boot back into the system normally. I added set aok=1 and set zfs:zfs_recover=1 in /etc/system and restarted again, then proceeded.
Any attempt to import my pool "tank", including:
zpool import -fFX tank
zpool import -fFX -o readonly=on tank
causes the same kernel panic message:
----- PANIC ----
panic[cpu2]/thread=ffffff00f8578c40:
assertion failed:...
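A sketch decoding the import flags used above (semantics per the illumos zpool man page; check your build before relying on them):

```shell
# -f            : force import even if the pool seems in use by another system
# -F            : recovery mode, discarding the last few transactions if that
#                 makes the pool importable again
# -X            : extreme rewind, searching much further back (only with -F)
# -o readonly=on: import without writing, so a failed attempt changes nothing
attempt="zpool import -fFX -o readonly=on tank"
echo "$attempt"
```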
2010 Nov 11
8
zpool import panics
...will panic again.
I have had the same symptom on my other host, for which this one is
basically the backup, so this one is my last line of defense.
I tried to run zdb -e backupPool_01 and it came up with this (among a
lot of other data, of course). On my other server I already tried to use
zfs:zfs_recover=1 and aok=1 in /etc/system, but that would not prevent
the kernel panic.
What else can I try? Right now I am running zdb -e -bscvL against it, as
I read somewhere that this had fixed such an issue for someone, but this
will of course take time and I don't know if this will lead to a...
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start and:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
--
This message posted from opensolaris.org