similar to: invalid vdev configuration after power failure

Displaying 20 results from an estimated 2000 matches similar to: "invalid vdev configuration after power failure"

2010 Sep 17
3
ZFS Dataset lost structure
After a crash, some datasets in my zpool tree report this when I do an ls -la: brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts. The same happens if I set zfs set mountpoint=legacy on the dataset and then mount it at another location. Before the crash the directory tree was only: dataset - vdisk.raw. The file was the backing device of a Xen VM, but I cannot access the directory structure of this dataset. However I
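For anyone hitting the same symptom, the legacy-mount step described above goes roughly like this; a sketch, where tank/mail-cts and /mnt/recover are placeholder names:

    # Take the dataset out of ZFS's automatic mount management
    zfs set mountpoint=legacy tank/mail-cts
    # Mount it by hand at an alternate location (Solaris syntax; on Linux use mount -t zfs)
    mkdir -p /mnt/recover
    mount -F zfs tank/mail-cts /mnt/recover
    # See whether the directory tree survived
    ls -la /mnt/recover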
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
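Before overwriting anything in a situation like this, the usual first step is to inspect the labels and uberblocks read-only with zdb; a sketch, with a hypothetical path to the UFS backing file:

    # Dump the four vdev labels stored in the pool file (each holds an uberblock array)
    zdb -l /ufs/poolfile
    # Dump the active uberblock for a pool the system can still open; extra u's add detail
    zdb -uuu mypool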
2010 Jun 02
11
ZFS recovery tools
Hi, I have just recovered from a ZFS crash. During the agonizing time this took, I was surprised to learn how undocumented the tools and options for ZFS recovery were. I managed to recover thanks to some great forum posts from Victor Latushkin; however, without his posts I would still be crying at night... I think the worst example is the zdb man page; all it does is ask you
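Later builds largely addressed this: zpool import grew a documented rewind-based recovery mode, which replaces most of the manual surgery discussed in threads like this one. A sketch, pool name hypothetical:

    # Dry run: report which transactions a rewind recovery would discard
    zpool import -Fn tank
    # Actual recovery import, rolling back to the last consistent txg
    zpool import -F tank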
2009 Jun 30
21
ZFS, power failures, and UPSes
Hello, I've looked around Google and the zfs-discuss archives but have not been able to find a good answer to this question (and the related questions that follow it): How well does ZFS handle unexpected power failures? (e.g. environmental power failures, power supply dying, etc.) Does it consistently recover gracefully? Should having a UPS be considered a (strong) recommendation or
2006 Sep 22
1
Linux Dom0 <-> Solaris prepared Volume
Hi all, I have been trying (in vain) to get a Solaris b44 DomU (downloaded from Sun) running on a Linux Xen host. I followed the howto exactly, and it looked OK when it started booting... But it never boots. I adapted the config file to boot with -v (so that I can see at least something) and this is what I get:
===SNIP===
root@Xen-VT02:/export/xc/xvm/solaris-b44# xm create solaris-b44-64.py -c
Using config
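For context, verbose boot for a PV Solaris DomU is normally arranged inside the xm config file itself; a sketch, where the kernel path in the comment is an assumption about the standard layout, not taken from the thread:

    # Re-create the domain with a console attached (-c)
    xm create solaris-b44-64.py -c
    # The -v goes on the kernel line inside the .py config, e.g.:
    #   extra = "/platform/i86xpv/kernel/amd64/unix -v"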
2007 Mar 21
4
HELP!! I can''t mount my zpool!!
Hi all. One of our servers had a panic and now can't mount the zpool anymore! Here is what I get at boot:
Mar 21 11:09:17 SERVER142 panic[cpu1]/thread=ffffffff90878200:
Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126
Mar 21 11:09:17 SERVER142
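The last-resort workaround that circulated for space_map assertion panics was to relax the assertion via /etc/system tunables and retry the import; a sketch, to be attempted only on a copy of the data or with backups in hand:

    # Recovery-only tunables; remove them again once the data is evacuated
    echo 'set zfs:zfs_recover=1' >> /etc/system
    echo 'set aok=1' >> /etc/system
    # Reboot, then retry the import
    zpool import -f tank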
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2). The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500GB replacement and had ZFS start a replace operation, which failed at about 2% because there were two broken
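The replace step itself looks roughly like this (device names hypothetical). Note that with two dead drives, a single-parity raidz has no redundancy left, which is why the resilver could not complete:

    # Identify the faulted devices
    zpool status -x tank
    # Swap the dead disk for the new 500GB drive
    zpool replace tank c1t3d0 c1t7d0
    # Watch resilver progress
    zpool status tank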
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc at c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me because I didn't try to replace the log on a running system. My
2009 Oct 23
7
cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME                 UUID                                 SUNW-MSG-ID
Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
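To map the cryptic vdev GUID in a report like this back to a physical device, the usual steps are roughly (UUID from the snippet; pool name hypothetical):

    # Full error-event detail, including the vdev GUID involved
    fmdump -eV -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
    # Dump the pool configuration and match that GUID against the per-child guid fields
    zdb -C tank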
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and iostat -xn show lots of idle disk time, no above-average service times, and no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
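The standard first checks when a scrub crawls are per-vdev and per-device I/O statistics, sampled while the scrub runs; a sketch:

    # Per-vdev bandwidth and IOPS every 5 seconds
    zpool iostat -v tank 5
    # Per-device service times and %busy (Solaris)
    iostat -xn 5
    # Current scrub rate and estimated time to completion
    zpool status tank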
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello, I have a Dell 2950 with a Perc 5/i and two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and would like some feedback about which setup would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
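The ZFS side of that comparison is just a striped pool across the bare disks; a minimal sketch with hypothetical device names (either flavor of RAID0 has no redundancy):

    # Give ZFS the disks individually (controller in pass-through mode)
    # and let it stripe across both top-level vdevs:
    zpool create tank c1t0d0 c1t1d0
    zpool status tank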
2009 Oct 31
1
Kernel panic on zfs import
Hi, I've got an OpenSolaris 2009.06 box that will reliably panic whenever I try to import one of my pools. What's the best practice for recovering (before I resort to nuking the pool and restoring from backup)? There are two pools on the system: rpool and tank. The rpool seems to be fine, since I can boot from a 2009.06 CD and 'zpool import -f rpool'; I can
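A common way to experiment with a pool like this without panicking the installed OS is to work from the live CD and import under an alternate root; a sketch:

    # From the 2009.06 live CD: list pools visible for import
    zpool import
    # Import tank under an alternate root so its mounts stay out of the way
    zpool import -f -R /a tank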
2008 Mar 17
4
VNC authentication failures to windows HVM
I've set xend service properties as follows:
config/vnc-listen astring "0.0.0.0"
config/vncpasswd astring "vnc"
config/default-nic astring "nge1"
but I get an authentication error whenever I try to connect to the console with 'vncviewer :0' - I'm on the xVM host itself. Other relevant data: SunOS
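For comparison, the svccfg incantation for those properties usually looks like the sketch below; the exact quoting is the fiddly part, so treat this as an approximation to verify against your build:

    # Set the VNC properties on the xend service, then refresh and restart it
    svccfg -s svc:/system/xvm/xend setprop config/vnc-listen = astring: '"0.0.0.0"'
    svccfg -s svc:/system/xvm/xend setprop config/vncpasswd = astring: '"vnc"'
    svcadm refresh svc:/system/xvm/xend
    svcadm restart svc:/system/xvm/xend
    # Then connect from the host itself
    vncviewer :0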
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
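For anyone reproducing this analysis, the crash-dump session is short; ::status and ::msgbuf are standard mdb dcmds for the panic summary and the kernel messages leading up to it:

    $ mdb unix.0 vmcore.0
    > ::status     # panic string, dump summary
    > ::msgbuf     # kernel message buffer before the panic
    > $c           # stack backtrace, as quoted above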
2002 Sep 23
2
can wine be run as daemon
I noticed that when I launched WINWORD.EXE the first time it took about 20 seconds to load, but when I ran it the second time it took only about 10 seconds! It seems that it is now cached in memory... I was wondering if it is possible to keep Wine loaded as a daemon so that it starts faster? Does my analogy here make sense? I would appreciate any help, comments,
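There is a real knob for this: wineserver can be started persistently so later launches skip its startup cost. A sketch; check your wineserver man page, since flag behavior has shifted across versions:

    # Start a wineserver that stays resident after the last client exits
    wineserver -p
    # Subsequent launches reuse the running server
    wine WINWORD.EXE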
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start, and:
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
   827 root     4936M 4931M sleep   59    0   0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
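Standard Solaris tools for finding out where a process like this is stuck, using the PID from the listing above:

    # User-level stacks of all 209 zdb threads
    pstack 827
    # Trace system calls, if any are being made
    truss -p 827
    # Kernel stacks for the process's threads
    echo "0t827::pid2proc | ::walk thread | ::findstack" | mdb -k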
2006 May 19
11
tracking error to file
In my testing, I've found the following error:
zpool status -v
  pool: local
 state: ONLINE
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
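The truncated output above normally ends with a list of damaged files by path; the usual flow to produce or refresh that list is:

    # Re-read all data so the error list reflects the current state
    zpool scrub local
    # Permanent errors are listed per file at the bottom of the -v output
    zpool status -v local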
2009 Nov 02
24
dedupe is in
Deduplication was committed last night by Mr. Bonwick:
> Log message:
> PSARC 2009/571 ZFS Deduplication Properties
> 6677093 zfs should have dedup capability
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html Via c0t0d0s0.org.
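Once you are on a build containing the commit, dedup is an ordinary per-dataset property; a minimal sketch with a hypothetical dataset name:

    # Enable deduplication on a dataset (pool must be at a dedup-capable version)
    zfs set dedup=on tank/data
    # The pool-wide dedup ratio then shows up in zpool list
    zpool list tank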
2007 Sep 13
2
zpool versioning
Hi, I was wondering if anyone would know whether this is just an accounting-type error in the recorded "version=" stored on disk, or if there are (or could be) any deeper issues with an "upgraded" zpool? I created a pool under a Sol10_x86_u3 install (11/06?), and zdb correctly reported the pool as a "version=3" pool. I reinstalled the OS with u4 (08/07), ran zpool
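Checking what version a pool is actually at, without changing anything, looks like this (pool name hypothetical):

    # List pools whose on-disk version is older than the software supports
    zpool upgrade
    # Show all versions this build supports and what each added
    zpool upgrade -v
    # Cross-check what the on-disk label claims
    zdb -C mypool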
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions. First, I have a physical host (call him bob) that was just installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the zfs file systems have been upgraded (5). chris@bob:~# zpool
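For reference, the upgrade commands themselves are short; a sketch against a hypothetical pool name, matching the versions mentioned above (pool version 27, filesystem version 5 on b145):

    # Upgrade the pool's on-disk format to the newest version this build supports
    zpool upgrade rpool
    # Recursively upgrade every filesystem in the pool
    zfs upgrade -r rpool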