similar to: unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes

Displaying 20 results from an estimated 100 matches similar to: "unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes"

2012 Jan 31
0
(gang?)block layout question, and how to decipher ZDB output?
Hello all, I'm "playing" with ZDB again on another test system, the rpool being uncompressed with 512-byte sectors. Here's some output that puzzles me (questions follow): # zdb -dddddddd -bbbbbb rpool/ROOT/nightly-2012-01-31 260050 ... 1e80000 L0 DVA[0]=<0:200972e00:20200> DVA[1]=<0:391820a00:200> [L0 ZFS plain file] fletcher4 uncompressed
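As a rough sketch of what that zdb call is doing and how the DVA fields read (this follows current illumos/OpenZFS zdb output conventions; older Solaris builds may format things slightly differently):

  # Repeated -d raises per-object verbosity, repeated -b raises block-pointer
  # verbosity; the dataset and object number are the ones from the thread.
  zdb -dddddddd -bbbbbb rpool/ROOT/nightly-2012-01-31 260050

  # Each L0 line describes one data block.  A DVA prints as
  #   DVA[n]=<vdev:offset:asize>
  # i.e. the top-level vdev number, the byte offset within that vdev (hex),
  # and the allocated size on disk (hex).  A second DVA normally holds another
  # copy of the same block (a "ditto" copy); the mismatched asize values in
  # the output above are precisely what the poster is asking about.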
2012 Jan 17
0
ZDB returning strange values
Hello all, I have a question about what output "ZDB -dddddd" should produce in L0 DVA fields. I expected there to be one or more same-sized references to data blocks stored in top-level vdevs (one vdev #0 in my 6-disk raidz2 pool), as confirmed by the source: http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/zdb/zdb.c#sprintf_blkptr_compact And I do see that for some of my
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds, when I issue the command zpool list, it does not show any pool; and when I again try to import, it says a device is missing in the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath. abcxxxx # zpool import pool: emcpool1 id:
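A hedged sketch of how this sort of import problem is usually narrowed down; the device directory below is an assumption about where PowerPath exposes its pseudo-devices on this host, so adjust it to whatever actually holds the emcpower devices:

  zpool import                        # list the pools the system can currently see
  zpool import -d /dev/dsk emcpool1   # search an explicit device directory for the pool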
2008 Aug 05
0
mdb & zdb should print info about crypt in blkptr
Author: Darren Moffat <Darren.Moffat at Sun.COM> Repository: /hg/zfs-crypto/gate Latest revision: 7a6ad1928ffa250a595fe19b5eef1923cf2a4c67 Total changesets: 1 Log message: mdb & zdb should print info about crypt in blkptr Files: update: usr/src/cmd/mdb/common/modules/zfs/zfs.c update: usr/src/cmd/zdb/zdb.c
2006 Oct 31
0
6366222 zdb(1M) needs to use largefile primitives when reading label
Author: eschrock Repository: /hg/zfs-crypto/gate Revision: e5f70a6fc5010aa205f244a25a9cdb950e0dae89 Log message: 6366222 zdb(1M) needs to use largefile primitives when reading label 6366267 zfs broke non-BUILD64 compilation of libsec Files: update: usr/src/cmd/zdb/zdb.c update: usr/src/lib/libsec/Makefile
2009 Aug 02
2
zdb assertion failure/zpool recovery
Hi, I have a corrupt pool, which lives in a VirtualBox .vdi file. IIRC the corruption (i.e. the pool being not importable) was caused when I killed VirtualBox because it was hung. This pool consists of a single vdev and I would really like to get some files out of that thing. So I tried running zdb, but this fails with an assertion failure:
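For reference, a rough sketch (not a recovery recipe) of pointing zdb at a pool that cannot be imported; the device directory is a placeholder, the .vdi would first have to be exposed as a plain file or lofi device, and the -A family of flags is only present in newer zdb builds:

  zdb -e -p /path/to/devdir -AAA -dd poolname
  # -e   treat the pool as exported (read the labels directly, no import)
  # -p   directory to search for the backing device
  # -AAA keep going past assertion failures like the one reported here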
2008 May 04
2
Inconsistencies with scrub and zdb
Hi List, First of all: S10u4 120011-14. So I have a weird situation. Earlier this week, I finally mirrored up two iSCSI-based pools. I had been wanting to do this for some time, because the availability of the data in these pools is important. One pool mirrored just fine, but the other pool is another story. The first lesson (I think) is that you should scrub your pools, at least those backed by
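The scrub-and-inspect loop being referred to is just the standard one; a minimal sketch, with "poolname" as a placeholder:

  zpool scrub poolname
  zpool status -v poolname   # watch scrub progress and any errors it turns up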
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it had failed in hardware. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated. ------------------- EMAIL ------------------- List of faulty resources:
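A hedged sketch of the usual first moves when importing a pool panics the box on every boot; the paths are the stock Solaris locations, and this is a generic outline rather than the poster's procedure:

  # Keep the pool from being auto-imported at boot (boot from other media or
  # the surviving OS disk first if necessary):
  mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
  # Then see what FMA actually recorded before touching the pool again:
  fmdump -v      # diagnosed faults
  fmdump -eV     # raw error telemetry behind them
  zpool import   # preview the pool's state before forcing anything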
2007 Nov 14
0
space_map.c ''ss == NULL'' panic strikes back.
Hi. Someone recently reported an 'ss == NULL' panic in space_map.c/space_map_add() on FreeBSD's version of ZFS. I found that this problem was previously reported on Solaris and is already fixed. I verified it, and FreeBSD's version has this fix in place...
2007 Sep 21
1
Is it solve.QP or is it me?
Hi. Here are three successive examples of simple quadratic programming problems with the same structure. Each problem has 2*N variables, and should have a solution of the form (1/N,0,1/N,0,...,1/N,0). In these cases, N=4,5,6. As you will see, the N=4 and 6 cases give the expected solution, but the N=5 case breaks down. >cm8 [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [1,] 1 0
2006 Oct 05
0
Crash when doing rm -rf
Not a really good subject, I know, but that's kind of what happened. I'm trying to build a backup server: Windows users use OSCAR (which uses rsync) to sync their files to a folder, and when the sync completes it takes a snapshot. It had worked before, but then I turned on the -R switch to rsync, and when I then removed the folder with rm -rf it crashed. I didn't save what
2007 Nov 09
3
Major problem with a new ZFS setup
We recently installed a 24-disk SATA array with an LSI controller attached to a box running Solaris 10 x86 Release 4. The drives were set up in one big pool with raidz, and it worked great for about a month. On the 4th, we had the system kernel panic and crash, and it's now behaving very badly. Here's what diagnostic data I've been able to collect so far: In the
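A sketch of the diagnostic data usually worth collecting in this situation, using only stock Solaris 10 tools:

  zpool status -xv   # unhealthy pools, per-device error counters, affected files
  fmdump -v          # faults FMA has diagnosed
  fmdump -eV         # the underlying error reports (ereports)
  iostat -En         # per-disk error counters from the disk drivers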
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all. One of our servers had a panic and now can't mount the zpool anymore! Here is what I get at boot: Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=ffffffff90878200: Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126 Mar 21 11:09:17 SERVER142
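For context, a heavily hedged sketch of the workaround that circulated for exactly this space_map.c assertion: make the failed assertion non-fatal and enable ZFS recovery mode, then import and copy the data off. It can make the damage worse, so it belongs on a backed-up or disposable copy of the devices if at all possible; "poolname" is a placeholder:

  # append to /etc/system, then reboot
  echo 'set aok=1'             >> /etc/system
  echo 'set zfs:zfs_recover=1' >> /etc/system
  reboot
  # after the reboot:
  zpool import poolname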
2006 Jul 20
1
tracking an error back to a file
Hi. I'm in the process of writing an introductory paper on ZFS. The paper is meant to be something that could be given to a systems admin at a site to introduce ZFS and document common procedures for using ZFS. In the paper, I want to document the method for identifying which file has a checksum error. In previous discussions on this alias, I've used the following
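The method usually documented for this is the zpool status -v / zdb pairing below; this is the generic mechanism rather than necessarily the exact procedure from the earlier discussions the poster mentions, and the pool, dataset, and object number are placeholders:

  zpool status -v poolname     # lists damaged files by path when it can
  # If it can only report something like  poolname/fs:<0x1234>, resolve the
  # object number (0x1234 = 4660 decimal) against the dataset:
  zdb -dddd poolname/fs 4660   # the object dump includes the file's path, if known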
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
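A short sketch of digging a little further into the same crash dump; these are standard mdb dcmds, and the '>' lines are typed at the mdb prompt:

  $ mdb unix.0 vmcore.0
  > ::status      # panic string, OS release, dump time
  > ::msgbuf      # console messages leading up to the panic
  > $c            # the stack backtrace quoted above
  > ::panicinfo   # registers and other panic details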
2008 Jun 20
1
zfs corruption...
Hi all, It would appear that I have a zpool corruption issue to deal with... The pool is exported, but upon trying to import it, the server panics. Are there any tools that can be used on a zpool that is in an exported state? I've got a separate test bed in which I'm trying to recreate the problem, but I keep getting messages to the effect that I need to import the pool first. Suggestions? thanks Jay
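On the question of tools for a pool in the exported state: zdb can examine one without importing it, via -e (and -p to name the directory holding the devices on the test bed). A minimal sketch, with "poolname" and the path as placeholders; flag availability varies a little by release:

  zdb -e -C poolname               # show the pool configuration from the labels
  zdb -e -p /dev/dsk -d poolname   # list the datasets without importing the pool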
2011 Jul 13
3
adding text to spplot
Hi all, I have a plot to which I would like to add text labels, and I can't find a way. Here is the code: enaD2 <- idw(D2~1, loca=dva, newdata=grd) pts = list("sp.points", dva, pch = 20, cex=1.5, col = "darkred") spplot(enaD2, "var1.pred", sp.layout=pts, main = "globina 60 cm", sub="D2",
2006 Nov 02
4
reproducible zfs panic on Solaris 10 06/06
Hi, I am able to reproduce the following panic on a number of Solaris 10 06/06 boxes (Sun Blade 150, V210 and T2000). The script to do this is:
#!/bin/sh -x
uname -a
mkfile 100m /data
zpool create tank /data
zpool status
cd /tank
ls -al
cp /etc/services .
ls -al
cd /
rm /data
zpool status
# uncomment the following lines if you want to see the system think
# it can still read and write to the
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS, System was rebooted and after reboot server again System is snv_39, SPARC, T2000 bash-3.00# ptree 7 /lib/svc/bin/svc.startd -s 163 /sbin/sh /lib/svc/method/fs-local 254 /usr/sbin/zfs mount -a [...] bash-3.00# zfs list|wc -l 46 Using df I can see most file systems are already mounted. > ::ps!grep zfs R 254 163 7 7 0 0x4a004000
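A sketch of one way to see exactly where that zfs mount is sitting, using only stock mdb dcmds on the live kernel (the pipeline below is the usual idiom; the resulting stack should show zil_replay if that is indeed where it is stuck):

  # mdb -k
  > ::pgrep zfs | ::walk thread | ::findstack -v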
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi, I just found out that ZFS triggers a kernel panic when switching a mounted volume into read-only mode. The system is attached to a Symmetrix, and all ZFS I/O goes through PowerPath. I ran some I/O-intensive stuff on /tank/foo and switched the device into read-only mode at the same time (symrdf -g bar failover -establish). ZFS went 'bam' and triggered a panic: WARNING: /pci at