similar to: ZFS Recovery after SAN Corruption

Displaying 20 results from an estimated 10000 matches similar to: "ZFS Recovery after SAN Corruption"

2008 Jun 20
1
zfs corruption...
Hi all, It would appear that I have a zpool corruption issue to deal with... the pool is exported, but upon trying to import it, the server panics. Are there any tools that can be used on a zpool while it is still in an exported state? I've got a separate test bed in which I'm trying to reproduce the problem, but I keep getting messages to the effect that I need to import the pool first. Suggestions? thanks Jay
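One way to examine a pool that is still exported is zdb's exported-pool mode, which reads the on-disk state without importing anything. A minimal sketch, assuming the pool is named tank and c1t0d0s0 is one of its member devices (both hypothetical):

  zdb -l /dev/dsk/c1t0d0s0   # dump the vdev labels from one member device (read-only)
  zdb -e -d tank             # walk an exported pool's datasets without importing it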
2011 Apr 24
2
zfs problem vdev I/O failure
Good morning, I have a problem with ZFS: ZFS filesystem version 4 ZFS storage pool version 15 Yesterday my machine running FreeBSD 8.2-RELENG shut down with an "ad4 detached" error while I was copying a big file... and after the reboot two WD Green 1TB drives said goodbye. One of them died and the other has ZFS errors: Apr 24 04:53:41 Flash root: ZFS: vdev I/O failure, zpool=zroot path= offset=187921768448 size=512 error=6
2006 Dec 12
3
ZFS Corruption
Please reply directly to me. I'm seeing the message below. Is it possible to determine exactly which file is corrupted? I was thinking the OBJECT/RANGE info may be pointing to it, but I don't know how to equate that to a file. # zpool status -v pool: u01 state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be
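If zpool status -v only reports errors as <dataset>:<object#> rather than a path, the object number is the file's inode number, so it can usually be mapped back to a file. A sketch, assuming dataset u01/data mounted at /u01/data and object 0x2d (all hypothetical):

  zpool status -v u01              # newer builds print the affected file names directly
  find /u01/data -xdev -inum 45    # 45 is the object number 0x2d converted to decimal
  zdb -ddddd u01/data 45           # zdb also prints the object's path for plain files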
2009 Jul 25
1
OpenSolaris 2009.06 - ZFS Install Issue
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD Green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install works fine, but creating the raidz1 pool and rebooting causes the machine to report "Cannot find active partition" upon reboot. Below is the command
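The "Cannot find active partition" message usually points at the boot disk's fdisk table rather than at the raidz pool itself, so it may be worth confirming that the Solaris partition on the OS disk is still marked active after the pool is created. A sketch with hypothetical device names (c8t0d0 as the OS disk, c8t1d0 through c8t4d0 as the data disks):

  zpool create tank raidz1 c8t1d0 c8t2d0 c8t3d0 c8t4d0   # the 4-disk raidz1 pool
  fdisk -W - /dev/rdsk/c8t0d0p0                          # print the boot disk's fdisk table; the Solaris partition should be flagged active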
2006 May 16
8
ZFS recovery from a disk losing power
running b37 on amd64. After removing power from a disk configured as part of a mirror, 10 minutes have passed and ZFS has still not offlined it. # zpool status tank pool: tank state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs to be replaced, and clear
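While waiting for FMA to fault the device, the disk can be taken out of service by hand and the fault telemetry inspected. A sketch, assuming the powered-off disk is c1t3d0 (hypothetical):

  zpool offline tank c1t3d0   # manually offline the powered-off disk
  zpool status -x tank        # confirm the pool is now reported as degraded
  fmdump -eV | tail           # look at the error telemetry FMA has collected so far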
2007 Mar 23
2
ZFS ontop of SVM - CKSUM errors
Hi. bash-3.00# uname -a SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc I created a first zpool (a stripe of 85 disks) and did some simple stress testing - everything seems almost all right (~700 MB/s sequential reads, ~430 MB/s sequential writes). Then I destroyed the pool and put an SVM stripe on top of the same disks, utilizing the fact that ZFS had already put an EFI label on them and s0 represents almost the entire disk. Then on top of
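For reference, layering a pool on an SVM metadevice generally looks like the sketch below (two slices and hypothetical names; the original post used 85 disks):

  metainit d100 1 2 c1t0d0s0 c1t1d0s0   # one SVM stripe across two slices
  zpool create tank /dev/md/dsk/d100    # build the pool on top of the metadevice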
2009 Nov 02
0
Kernel panic on zfs import (hardware failure)
Hey, On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin <Victor.Latushkin at sun.com> wrote: > Donald Murray, P.Eng. wrote: >> >> Hi, >> >> I've got an OpenSolaris 2009.06 box that will reliably panic whenever >> I try to import one of my pools. What's the best practice for >> recovering (before I resort to nuking the pool and
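On builds newer than 2009.06 there are import options intended for exactly this situation; they are not available on snv_111 itself, so treat this as a sketch of what a later boot environment or live CD could try (the pool name tank is hypothetical):

  zpool import -o readonly=on -R /mnt tank   # read-only import, avoids replaying damaged state
  zpool import -F tank                       # recovery import: discards the last few transactions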
2007 Feb 11
0
unable to mount legacy vol - panic in zfs:space_map_remove - zdb crashes
I have a 100GB SAN LUN in a pool; it had been running OK for about 6 months, then panicked the system this morning. The system was running S10U2. In the course of troubleshooting I've installed the latest recommended bundle, including KJP 118833-36 and ZFS patch 124204-03. Created as: zpool create zfspool01 /dev/dsk/emcpower0c zfs create zfspool01/nb60openv zfs set mountpoint=legacy zfspool01/nb60openv
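Since the dataset has mountpoint=legacy, it is mounted with mount(1M) rather than by zfs mount; a minimal sketch (the mount point /nb60openv is hypothetical):

  mount -F zfs zfspool01/nb60openv /nb60openv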
2007 Jul 18
1
Converting exisitng ZFS pool to MPxIO
We have a Sun V890, and I'm interested in converting an existing ZFS zpool from c#t#d# device names to MPxIO. % zpool status pool: data state: ONLINE status: ONLINE scrub: scrub completed with 0 errors on Sun Jul 15 10:58:33 2007 config: NAME STATE READ WRITE CKSUM data ONLINE 0 0 0 mirror ONLINE 0 0 0 c1t2d0
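One commonly suggested route is to let stmsboot enable MPxIO and rely on ZFS finding the devices again by their device IDs after the reboot; a sketch:

  stmsboot -e        # enable MPxIO for the FC HBAs; the tool asks for a reboot
  zpool status data  # after the reboot the mirror should show the new MPxIO (c#t<WWN>d#) names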
2010 Jun 30
1
zfs rpool corrupt?????
Hello, Has anyone encountered the following error message, running Solaris 10 U8 in an LDom? bash-3.00# devfsadm devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor bash-3.00# zpool status -v rpool pool: rpool state: DEGRADED status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in
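Assuming the truncated action text is the usual "restore the file in question" advice, the affected files can be listed, restored from backup, and the error counters cleared and re-verified; a sketch:

  zpool status -v rpool   # list the files flagged with permanent errors
  zpool clear rpool       # clear the error counters once the files have been restored
  zpool scrub rpool       # re-verify the pool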
2006 Jun 12
3
ZFS + Raid-Z pool size incorrect?
I'm seeing odd behaviour when I create a ZFS raidz pool using three disks. The output of "zpool status" shows the pool size as the size of the three disks combined (as if it were a RAID 0 volume). This isn't expected behaviour, is it? When I create a mirrored volume in ZFS everything is as one would expect: the pool is the size of a single drive. My setup: Compaq
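This is most likely the documented difference between the two reporting commands rather than a mis-sized pool: zpool list reports the raw capacity of all devices, parity included, while zfs list reports usable space. A quick check, assuming the pool is named tank (hypothetical):

  zpool list tank   # raw size: roughly the three disks added together
  zfs list tank     # usable size: roughly two disks' worth for a 3-disk raidz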
2008 Aug 06
0
zfs status -v tries too hard?
After some errors were logged about a problem with a ZFS file system, I ran zpool status followed by zpool status -v... # zpool status pool: ehome state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. see:
2009 Jul 05
0
Solaris ZFS native API publicly available??
Hi All, I would like to know whether the ZFS native API for SunOS (http://www.opensolaris.org/os/community/zfs/source/) is publicly available now. I see in some old mailing lists (2 years old) that it was not publicly available. Is this still true? Also I see there is a Java API available at https://zfs.dev.java.net/apidocs/org/jvnet/solaris/libzfs/LibZFS.html &
2009 Aug 02
1
zpool status showing wrong device name (similar to: ZFS confused about disk controller )
Hi All, over the last couple of weeks, I had to boot from my rpool from various physical machines because some component on my laptop mainboard blew up (you know that burned-electronics smell?). I can't retrospectively document all I did, but I am sure I recreated the boot archive, ran devfsadm -C and deleted /etc/zfs/zpool.cache several times. Now zpool status is referring to a
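The stale names are most likely coming from the cached device paths; for an ordinary pool an export/import rewrites them, but a root pool has to be re-imported from other boot media. A sketch (tank is a hypothetical non-root pool):

  devfsadm -Cv                 # clean up stale /dev links first
  zpool export tank
  zpool import tank            # the import re-resolves and re-caches the device paths
  zpool import -f -R /a rpool  # for the root pool, do this from a live CD or other boot environment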
2006 Jan 27
2
Do I have a problem? (longish)
Hi, to keep the story short, here is the situation. I have 4 disks in a ZFS/SVM config: c2t9d0 9G c2t10d0 9G c2t11d0 18G c2t12d0 18G c2t11d0 is divided in two: selecting c2t11d0 [disk formatted] /dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M). /dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M). /dev/dsk/c2t11d0s2 is in use by zpool storedge. Please
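To see exactly which slices each side of the stack is holding, both ZFS and SVM can report their components; a sketch using the names from the post:

  zpool status storedge   # slices in use by the zpool
  metastat -p d11         # slices making up the SVM stripe, in metainit syntax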
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread... so I'm reposting it) I am trying to move my data off of a 40GB 3.5" drive to a 40GB 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
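For a ZFS root on SPARC, the attached mirror half will not boot until it gets a ZFS boot block; a sketch of the usual step, using the device from the post:

  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
  # then point the OBP boot-device alias (or boot <alias>) at the new disk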
2009 Jan 05
3
ZFS import on pool with same name?
I have an OpenSolaris snv_101 box with ZFS on it (Sun Ultra 20 M2); the zpool name is rpool. Then I have a 2nd hard drive in the box that I am trying to recover the ZFS data from (long story, but that HD became unbootable after installing IPS on the machine). Both drives have a pool named "rpool", so I can't import the rpool from the 2nd drive. root@hyperion:~# zpool status
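zpool import can import a pool under a different name, and pools with clashing names can be selected by their numeric ID; a sketch (the ID and the new name oldrpool are hypothetical):

  zpool import                               # lists importable pools with their numeric IDs
  zpool import -R /mnt 1234567890 oldrpool   # import that pool by ID under a new name and altroot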
2010 May 05
0
zfs destroy -f and dataset is busy?
We have a pair of OpenSolaris systems running snv_124. Our main zpool 'z' is running ZFS pool version 18. Problem: # zfs destroy -f z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00 cannot destroy 'z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00': dataset is busy I have tried to destroy numerous datasets and have been unable to, even with the -f option.
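On pool version 18 a snapshot can also be pinned by a user hold, which makes destroy fail with "dataset is busy"; holds can be listed and released, and a dependent clone would also block the destroy. A sketch (the hold tag "keep" is hypothetical):

  zfs holds z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
  zfs release keep z/Users/harrison@zfs-auto-snap:daily-2010-04-09-00:00
  zfs list -o name,origin -r z   # any clone of the snapshot shows it as its origin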
2009 Nov 04
1
ZFS non-zero checksum and permanent error with deleted file
Hello, I am actually using ZFS under FreeBSD, but maybe someone over here can help me anyway. I'd like some advice on whether I can still rely on one of my ZFS pools: [user@host ~]$ sudo zpool clear zpool01 ... [user@host ~]$ sudo zpool scrub zpool01 ... [user@host ~]$ sudo zpool status -v zpool01 pool: zpool01 state: ONLINE status: One or more devices has experienced an
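The permanent-error list can keep naming a file that has since been deleted; the entries normally go away once the errors are cleared and a full scrub completes, so a clear-scrub-status cycle (as already attempted above) is the usual way to decide whether the pool can still be trusted. A sketch:

  zpool clear zpool01
  zpool scrub zpool01       # wait for the scrub to finish
  zpool status -v zpool01   # errors that only referenced deleted files should be gone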
2006 Nov 02
4
reproducible zfs panic on Solaris 10 06/06
Hi, I am able to reproduce the following panic on a number of Solaris 10 06/06 boxes (Sun Blade 150, V210 and T2000). The script to do this is: #!/bin/sh -x uname -a mkfile 100m /data zpool create tank /data zpool status cd /tank ls -al cp /etc/services . ls -al cd / rm /data zpool status # uncomment the following lines if you want to see the system think # it can still read and write to the