Displaying 20 results from an estimated 500 matches similar to: "Recovering from kernel panic / reboot cycle importing pool."
2008 Mar 20 (7 replies): ZFS panics solaris while switching a volume to read-only
Hi,
I just found out that ZFS triggers a kernel-panic while switching a mounted volume
into read-only mode:
The system is attached to a Symmetrix, all zfs-io goes through Powerpath:
I ran some io-intensive stuff on /tank/foo and switched the device into
read-only mode at the same time (symrdf -g bar failover -establish).
ZFS went 'bam' and triggered a panic:
WARNING: /pci at
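A rough sketch of the reproduction described above, with placeholder load and file names (the symrdf device-group command is the one quoted in the post; the dd load is just an illustrative stand-in for the io-intensive job):
# keep heavy writes going against the mounted pool
dd if=/dev/zero of=/tank/foo/loadfile bs=1024k count=10000 &
# while the writes run, fail the SRDF device group over, which flips the LUN read-only
symrdf -g bar failover -establish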
2006 Oct 05 (0 replies): Crash when doing rm -rf
Not a really good subject, I know, but that's kind of what happened.
I'm trying to build a backup-solution server: Windows users use OSCAR (which uses rsync) to sync their files to a folder, and when the sync completes a snapshot is taken. It has worked before, but then I turned on the -R switch to rsync, and when I then removed the folder with rm -rf it crashed. I didn't save what
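For reference, a minimal sketch of the sync-then-snapshot cycle described above (dataset and source paths are placeholders, not the poster's actual layout):
# pull the client's files into the backup dataset, then freeze the result
rsync -aR /clients/user1/ /tank/backup/user1/
zfs snapshot tank/backup@`date +%Y-%m-%d`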
2012 Jan 31 (0 replies): (gang?)block layout question, and how to decipher ZDB output?
Hello, all
I'm "playing" with ZDB again on another test system,
the rpool being uncompressed with 512-byte sectors.
Here's some output that puzzles me (questions follow):
# zdb -dddddddd -bbbbbb rpool/ROOT/nightly-2012-01-31 260050
...
1e80000 L0 DVA[0]=<0:200972e00:20200>
DVA[1]=<0:391820a00:200> [L0 ZFS plain file] fletcher4 uncompressed
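As a reading aid: each DVA prints as <vdev:offset:asize> with all three values in hex, and zdb can read a block back by those coordinates. A hedged example using the first DVA quoted above:
# dump the raw block behind DVA[0]: vdev 0, offset 0x200972e00, asize 0x20200
zdb -R rpool 0:200972e00:20200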
2006 Nov 02 (4 replies): reproducible zfs panic on Solaris 10 06/06
Hi,
I am able to reproduce the following panic on a number of Solaris 10 06/06 boxes (Sun Blade 150, V210 and T2000). The script to do this is:
#!/bin/sh -x
uname -a
mkfile 100m /data
zpool create tank /data
zpool status
cd /tank
ls -al
cp /etc/services .
ls -al
cd /
rm /data
zpool status
# uncomment the following lines if you want to see the system think
# it can still read and write to the
2011 Nov 08 (1 reply): Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and recently it has started failing to boot: it hangs after the copyright message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
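One hedged thing to try in that situation, if the installed zpool supports it, is a read-only import with nothing mounted, so the kernel does not have to replay or dirty much state (the altroot is a placeholder):
zpool import -N -R /a -o readonly=on rpool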
2007 Nov 09 (3 replies): Major problem with a new ZFS setup
We recently installed a 24 disk SATA array with an LSI controller attached
to a box running Solaris X86 10 Release 4. The drives were set up in one
big pool with raidz, and it worked great for about a month. On the 4th, we
had the system kernel panic and crash, and it's now behaving very badly.
Here's what diagnostic data I've been able to collect so far:
In the
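Typical first-pass diagnostics on a Solaris box in that state, as a hedged sketch (no assumptions about the actual pool name or devices):
zpool status -xv           # pool health plus any files with persistent errors
fmdump -eV | tail -100     # recent FMA error telemetry (disk/transport events)
fmadm faulty               # faults FMA has already diagnosed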
2007 Feb 11 (0 replies): unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes
I have a 100GB SAN LUN in a pool that has been running OK for about 6 months; it panicked the system this morning. The system was running S10U2. In the course of troubleshooting I've installed the latest recommended bundle including kjp 118833-36 and zfs patch 124204-03
created as:
zpool create zfspool01 /dev/dsk/emcpower0c
zfs create zfspool01/nb60openv
zfs set mountpoint=legacy zfspool01/nb60openv
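With mountpoint=legacy the dataset is no longer mounted by zfs mount; it has to be mounted by hand or via /etc/vfstab. A minimal sketch using the dataset created above (the mount point is a placeholder):
mount -F zfs zfspool01/nb60openv /mnt/nb60openv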
2006 Jul 20 (1 reply): tracking an error back to a file
Hi. I'm in the process of writing an introductory paper on ZFS.
The paper is meant to be something that could be given to a systems
admin at a site to introduce ZFS and document common procedures for
using ZFS.
In the paper, I want to document the method for identifying which
file has a checksum error. In previous discussions on this alias,
I've used the following
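The usual answer is zpool status -v, which lists the affected file paths under the errors: section once a checksum failure has been recorded as a permanent error. A sketch with a placeholder pool name:
zpool status -v tank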
2008 Jun 20 (1 reply): zfs corruption...
Hi all,
It would appear that I have a zpool corruption issue to deal with...
pool is exported, but upon trying to import it, server panics. Are
there any tools available on a zpool that is in an exported state?
I've got a separate test bed in which I'm trying to recreate the issue, but I
keep getting messages to the effect that I need to import the pool first.
Suggestions?
thanks
Jay
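zdb can examine a pool that is still exported via its -e flag, which may be worth trying before risking another import. A hedged sketch with placeholder pool and device names:
zdb -e -d mypool                # walk the datasets of the exported pool without importing it
zdb -l /dev/rdsk/c0t0d0s0       # inspect the on-disk labels of one member device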
2010 Sep 17 (3 replies): ZFS Dataset lost structure
After a crash, some datasets in my zpool tree report this when I do an ls -la:
brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts
Also, if I set
zfs set mountpoint=legacy dataset
and then I mount the dataset at another location.
Before, the directory tree was only:
dataset
- vdisk.raw
The file was the backing device of a Xen VM, but I cannot access the directory structure of this dataset.
However I
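A hedged first check of what kind of object that dataset now is (the pool name is a placeholder; a filesystem that suddenly shows up as a block-device node suggests the object type itself is in question):
zfs list -o name,type,mountpoint -r pool
zfs get type,mounted,mountpoint pool/mail-cts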
2008 Dec 15 (15 replies): Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
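Before invalidating anything it helps to see what is actually on disk. A hedged sketch (since the pool lives in a file on UFS, the file path itself can be handed to zdb -l; the path and pool name are placeholders):
zdb -l /path/to/poolfile       # dump the vdev labels (the uberblock arrays live alongside them on disk)
zdb -e -u mypool               # print the active uberblock of the exported pool, if zdb can open it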
2007 Apr 23 (3 replies): ZFS panic caused by an exported zpool??
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144
2012 Jan 17 (0 replies): ZDB returning strange values
Hello all, I have a question about what output "ZDB -dddddd" should
produce in L0 DVA fields. I expected there to be one or more
same-sized references to data blocks stored in top-level vdevs
(one vdev #0 in my 6-disk raidz2 pool), as confirmed by the source:
http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/zdb/zdb.c#sprintf_blkptr_compact
And I do see that for some of my
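One hedged note that may explain part of it: on a raidz2 top-level vdev the asize stored in a DVA includes parity and padding sectors, so it is larger than the psize of the data itself. Rough arithmetic for a 6-disk raidz2 with 512-byte sectors:
# a 128K block = 256 data sectors; each row of 4 data sectors carries 2 parity sectors
# asize ~ 256 * 6/4 = 384 sectors = 192K (plus padding up to a multiple of nparity+1 = 3 sectors)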
2006 Jan 04 (8 replies): Using same ZFS under different kernel versions
I built two zfs filesystems using b29 (from brandz).
I then re-installed solaris express b28, preserving the zfs filesystems.
When I tried to "zpool import" my zfs filesystems I got a kernel panic:
> debugging crash dump vmcore.0 (32-bit) from blackbird
> operating system: 5.11 snv_28 (i86pc)
> panic message:
> ZFS: bad checksum (read on /dev/dsk/c1d0p0 off 24d5e000: zio
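A hedged sketch of checking what each kernel supports before moving a pool between builds (the pool name is a placeholder; older builds may not expose all of these):
zpool upgrade -v         # pool versions this kernel understands
zpool upgrade            # pools that are below the kernel's current version
zpool get version tank   # the on-disk version of a specific pool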
2012 Dec 20 (3 replies): Pool performance when nearly full
Hi
I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking
(and I'd check the ZFS wikis but the websites are down at the moment).
Firstly, which is correct, free space shown by "zfs list" or by "zpool iostat" ?
zfs list:
used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%
zpool iostat:
used
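A hedged way to put the two views side by side (the pool name is a placeholder). For raidz pools the zpool-level numbers are raw capacity including parity, while zfs list reports usable space after redundancy and reservations, which is the usual source of the mismatch:
zpool list tank                    # raw SIZE/ALLOC/FREE as the pool layer counts it
zfs list -o space -r tank | head   # usable space, broken down by what is consuming it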
2010 Jan 30 (3 replies): Checksum fletcher4 or sha256?
Hi,
I'm almost ready to deploy my new home server for final testing.
Before that, I want to be sure that nothing big is left untouched.
Reading the ZFS Admin Guide about the checksum method, there's no advice about it.
The default is fletcher4; there's also SHA256.
Now, sha256 is pretty 'heavy' to calculate, so I think that it's left out because can
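For reference, a minimal sketch of inspecting and changing the checksum algorithm per dataset (names are placeholders); the new setting only applies to blocks written after the change:
zfs get checksum tank/data
zfs set checksum=sha256 tank/data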
2007 Sep 21 (1 reply): Is it solve.QP or is it me?
Hi.
Here are three successive examples of simple quadratic programming problems
with the same structure. Each problem has 2*N variables, and should have a
solution of the form (1/N,0,1/N,0,...,1/N,0). In these cases, N=4,5,6. As
you will see, the N=4 and 6 cases give the expected solution, but the N=5
case breaks down.
>cm8
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 1 0
2007 Dec 12 (0 replies): Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The message in /var/adm/messages for the disks was 'device busy too long'. Then SMF printed this message:
Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007
Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
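A hedged sketch of the commands usually involved once a failed disk is healthy or replaced (pool and device names are placeholders); detaching the spare returns it to the available-spares list after the original slot has resilvered:
zpool online tank c3t5d0       # bring the original device back online and resilver it
zpool detach tank c4t0d0       # release the hot spare once the vdev is healthy again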
2011 Nov 05 (4 replies): ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What
would you do next to try and recover this zfs pool?
I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was
composed of 4 1.5 TiB disks. One disk is totally dead. Another had
SMART errors, but using GNU ddrescue I was able to copy all the data
off successfully.
I have copied all 3 remaining disks as images using
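A hedged sketch of one way to attempt the import from the rescued images on an illumos/Solaris host (image paths are placeholders, bank0 is the pool name from the post, and read-only import depends on the build):
lofiadm -a /images/disk1.img              # repeat per image; creates /dev/lofi/1, /dev/lofi/2, ...
zpool import -d /dev/lofi -o readonly=on -f bank0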
2013 Oct 26 (2 replies): [PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub
diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk
CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/