Displaying 6 results from an estimated 6 matches for "c7t1d0s0".
2010 Jun 29 (0 replies)
Processes hang in /dev/zvol/dsk/poolname
...fffff000cb54ca0 devzvol_lookup+0xf8()
ffffff000cb54d20 sdev_iter_datasets+0xb0()
ffffff000cb54da0 devzvol_readdir+0xd6()
ffffff000cb54e20 fop_readdir+0xab()
ffffff000cb54ec0 getdents64+0xbc()
ffffff000cb54f10 sys_syscall32+0xff()
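Kernel stacks like the one above are typically pulled from a live system with mdb; a minimal sketch (the module filter and the PID placeholder are illustrative, not from the post):

# Dump unique kernel thread stacks for threads currently in ZFS code
# (run as root on Solaris/illumos).
echo "::stacks -m zfs" | mdb -k
# Or print the stacks of every thread in one hung process:
echo "0t<PID>::pid2proc | ::walk thread | ::findstack -v" | mdb -k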
--- DISK ---
-bash-4.0$ sudo /usr/sbin/zdb -l /dev/dsk/c7t1d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
version: 22
name: 'puddle'
state: 0
txg: 55553139
pool_guid: 13462109782214169516
hostid: 4421991
hostname: 'amd'
top_guid: 158952407...
2008 Jun 05 (6 replies)
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc at c4solutions.net>
NOTE: This is a last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me b/c I didn't try to replace the log on a running
system. My
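The safe path the PS alludes to, swapping a log device while the pool is still imported, is an ordinary replace; a minimal sketch with assumed pool and device names:

# On a live, imported pool, replace a separate log device in place.
zpool replace tank c1t0d0 c2t0d0    # old slog -> new slog (assumed names)
# Or, for a mirrored slog, attach the new device and detach the old one:
zpool attach tank c1t0d0 c2t0d0
zpool detach tank c1t0d0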
2009 Oct 29 (2 replies)
Difficulty testing an SSD as a ZIL
Hi all,
I received my SSD and wanted to test it using fake zpools with files as backing stores before attaching it to my production pool. However, when I exported the test pool and imported it again, I got an error. Here is what I did:
I created a file to use as a backing store for my new pool:
mkfile 1g /data01/test2/1gtest
Created a new pool:
zpool create ziltest2 /data01/test2/1gtest
Added the
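The full test sequence the poster describes would look roughly like this; the log-file name and the -d import path are assumptions beyond what the snippet shows:

# Back both the pool and the stand-in ZIL with plain files.
mkfile 1g /data01/test2/1gtest
mkfile 1g /data01/test2/1glog          # assumed name for the fake slog
zpool create ziltest2 /data01/test2/1gtest
zpool add ziltest2 log /data01/test2/1glog
# Round-trip the pool; file-backed vdevs need -d on import.
zpool export ziltest2
zpool import -d /data01/test2 ziltest2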
2010 May 16 (9 replies)
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it
before I shut down the server. Now I am not able to mount the pool. I am
not concerned with the data in this pool, but I would like to try to figure
out how to recover it.
I am running Nexenta 3.0 NCP (b134+).
I have tried a couple of the commands (zpool import -f and zpool import -FX
llift)
root at
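The usual escalation for a pool whose log device has vanished is a recovery-mode import; a minimal sketch, using the pool name llift from the snippet (the -m option only exists on builds newer than some b134-era systems):

# -F rewinds to the last importable txg; -X (extreme rewind) searches
# further back; -m, where available, imports with the log device missing.
zpool import -F llift
zpool import -FX llift
zpool import -m llift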
2009 Sep 26 (5 replies)
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2).
The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500 GB replacement and had zfs start a replace operation, which failed at about 2% because there were two broken
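The replace operation described would have been kicked off roughly like this; the pool and device names are assumptions, and with two failed members a single-parity raidz cannot finish the resilver:

# Identify the faulted members, then swap in the new 500 GB disk.
zpool status -x
zpool replace tank c0t3d0 c0t7d0    # failed disk -> new disk (assumed names)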
2012 Feb 18 (6 replies)
Cannot mount encrypted filesystems.
...Disk 1         Disk 8     zpools
   +--+           +--+
   |p1|    ..     |p1|   <- slice_0
   +--+           +--+
   |p2|    ..     |p2|   <- slice_1
   +--+           +--+
   |p3|    ..     |p3|   <- slice_2
   +--+           +--+
zpool status shows:
...
NAME          STATE
slice_0       ONLINE
  raidz3-0    ONLINE
    c7t0d0s0  ONLINE
    c7t1d0s0  ONLINE
    c7t2d0s0  ONLINE
    c7t3d0s0  ONLINE
    c7t4d0s0  ONLINE
    c7t5d0s0  ONLINE
    c7t6d0s0  ONLINE
    c7t7d0s0  ONLINE
...
And several file systems on each pool:
zfs list shows:
rpool
...
rpool/export
rpool/export/home
rpool/export/home/user1
...
slice_0...
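Mount failures on encrypted datasets usually mean the wrapping key isn't loaded; a minimal sketch, assuming Solaris 11-style ZFS crypto with a passphrase keysource (the user1 dataset name is taken from the listing):

# Check whether the key is the blocker.
zfs get -r encryption,keystatus rpool/export/home
# Load the key (prompts for the passphrase), then mount.
zfs key -l rpool/export/home/user1
zfs mount rpool/export/home/user1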