similar to: Can't access my data

Displaying 20 results from an estimated 4000 matches similar to: "Can't access my data"

2008 Apr 29
24
recovering data from a detached mirrored vdev
Hi, my system (Solaris b77) was physically destroyed and I lost data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like tct, http://www.porcupine.org/forensics/) I can use to recover at least some of the data. Thanks in advance for
2008 Dec 18
3
automatic forced zpool import with unmatched hostid
Hi, since the hostid is stored in the label, "zpool import" fails if the hostid doesn't match. Under certain circumstances (LDOM failover) it means you have to manually force the zpool import while booting. With more than 80 LDOMs on a single host it would be great if we could configure the machine back to the old behavior, where it didn't fail, maybe with an /etc/system
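The manual workaround referred to above is simply a forced import at boot; a minimal sketch, assuming a pool named ldompool (the name is an illustration, not from the post):

    # hostid in the label does not match this host, so override the check
    # ("ldompool" is an assumed pool name)
    zpool import -f ldompool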
2008 May 18
2
possible zfs bug? lost all pools
After trying to mount my ZFS pools in single-user mode I got the following message for each: May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as it was last accessed by another system (host: gw.bb1.matik.com.br hostid: 0xbefb4a0f). See: http://www.sun.com/msg/ZFS-8000-EY Any zpool command returned nothing other than that the ZFS does not exist; it seems the ZFS info on the disks
2006 Sep 13
16
Comments on a ZFS multiple use of a pool, RFE.
I filed this RFE earlier; since there is no way for non-Sun personnel to see this RFE for a while, I am posting it here and asking for feedback from the community. [Fwd: CR 6470231 Created P5 opensolaris/triage-queue Add an in-use check that is enforced even if import -f is used.]
2009 Aug 12
4
zpool import -f rpool hangs
I had an rpool with two SATA disks in a mirror. Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc. Unfortunately the first disk, with the GRUB loader, failed with unrecoverable block write/read errors. Now I have the problem of importing rpool after the first disk has failed. So I decided to do "zpool import -f rpool" with only the second disk, but it hangs and the system is
2011 Jan 29
19
multiple disk failure
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20 minutes and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( It hung the server and the box rebooted. After it rebooted, the entire pool is gone and in the state below. I had only written a few
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds when I issue the command "zpool list" it does not show any pool, and when I try to import again it says a device is missing from the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath. abcxxxx # zpool import pool: emcpool1 id:
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc at c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me because I didn't try to replace the log on a running system. My
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi, is there anything known about ZFS under 10.0-BETA4 when FreeBSD was upgraded from 9.2-RELEASE? I have two servers with very different hardware (one uses soft RAID and the other does not), and after a zpool upgrade there is no way to get the servers booting. Am I missing something when upgrading? I cannot get the error message for the moment. I reinstalled the RAID server under Linux and the other
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What would you do next to try to recover this ZFS pool? I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was composed of four 1.5 TiB disks. One disk is totally dead. Another had SMART errors, but using GNU ddrescue I was able to copy all the data off successfully. I have copied all 3 remaining disks as images using
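A common next step with rescued disk images (this is a sketch under assumptions, not taken from the thread) is to point zpool import at the directory holding the image files and keep the pool read-only while inspecting it:

    # /recovery/images is a hypothetical directory containing the ddrescue output files
    zpool import -d /recovery/images -o readonly=on -f bank0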
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing a fair number of errors on multiple machines
Hi, After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer revision, all arrays I have using ZFS mirroring are displaying errors. This started happening immediately after ZFS upgrades. Here is an example: ormandj at neutron.corenode.com:~$ zpool status pool: rpool state: DEGRADED status: One or more devices has experienced an unrecoverable error. An attempt was
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and recently it started failing to boot - it hangs after the copyright message whenever I use any of my GRUB menu options. Booting with an oi_148a LiveUSB that I have kept around since installation, I ran some zdb traversals over the rpool and some zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed on the list with
2007 Feb 10
16
How to back up a slice? - newbie
... though I tried, read, and typed for the last 4 hours; still no clue. Please, can anyone give a clear idea of how this works: get the content of c0d1s1 to c0d0s7? c0d1s1 is pool home and active; c0d0s7 is not active. I have followed the suggestion on http://www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf % sudo zfs snapshot home@backup % zfs list NAME USED AVAIL REFER
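The demo the poster is following boils down to snapshot-and-send; a minimal sketch, assuming a new pool named backup is created on the spare slice (pool and dataset names here are assumptions for illustration):

    # create a pool on the target slice
    zpool create backup c0d0s7
    # snapshot the source pool and copy it into the new pool
    zfs snapshot home@backup
    zfs send home@backup | zfs receive backup/home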
2009 Jan 15
2
zfs drive keeps failing between export and import
I have a zpool that consists of a two-drive mirror. The two times I took the zpool offline, I had to resilver one of the drives (the same drive both times) when I imported it back. All drives in the pool show no read, write, or checksum errors and are new, so I'm looking at a software problem before hardware. Both drives are encrypted geli devices. I tried to reproduce the error with 1GB
2009 Jul 01
14
can't boot 2009.06 domU on Xen 3.4.1 / CentOS 5.3 dom0
I've got a CentOS 5.3 dom0 with Xen 3.4.1-rc5 (or so). I've tried the same stuff below with 3.4.0, no difference. I'm trying to install a 2009.06 PV domU based on instructions from [1] and [2]. I can run the install fine, and I can also get the kernel and boot archive (from [2]) after the install. But for the life of me I can't get the installed domU to boot. If I
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
Hi, I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I encountered a FreeBSD problem (PR kern/128083) and decided to update the motherboard BIOS. It looked like the update went right, but after that I was shocked to see my ZFS destroyed! Rolling the BIOS back did not help. Now it looks like this: # zpool status pool: tank state: UNAVAIL status:
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to try to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have tried a couple of the commands (zpool import -f and zpool import -FX llift) root at
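Besides -f and -FX, newer ZFS releases accept -m to import a pool whose separate log device is missing; a sketch only, since that flag may not be available on b134:

    # -m tolerates a missing log device; "llift" is the pool name from the post
    zpool import -f -m llift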
2007 Apr 10
3
Renaming a pool?
Hi all, I have a pool called tank/home/foo and I want to rename it to tank/home/bar. What's the best way to do this (the zfs and zpool man pages don't have a "rename" option)? One way I can think of is to create a clone of tank/home/foo called tank/home/bar, and then destroy the former. Is that the best (or even only) way? TIA, -- Rich Teer, SCSA, SCNA, SCSECA,
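For what it is worth, zfs(1M) does provide a rename subcommand for datasets, so no clone-and-destroy is needed; a minimal sketch using the names from the post:

    # renames the dataset in place, keeping its data and snapshots
    zfs rename tank/home/foo tank/home/bar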
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2). The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500 GB replacement and had ZFS start a replace operation, which failed at about 2% because there were two broken
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the ZFS file systems have been upgraded (5). chris at bob:~# zpool
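For context, the pool and file-system upgrades mentioned above are separate commands; a minimal sketch, assuming the default root pool name rpool:

    zpool upgrade -v       # list the pool versions this build supports
    zpool upgrade rpool    # upgrade the pool itself (to version 27 on b145)
    zfs upgrade -r rpool   # recursively upgrade the file systems (to version 5)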