search for: whole_disk

Displaying 20 results from an estimated 33 matches for "whole_disk".

2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
...e='elatte_installcd' vdev_tree type='root' id=0 guid=7417064082496892875 children[0] type='disk' id=0 guid=16996723219710622372 path='/dev/dsk/c1d0s0' devid='id1,cmdk@AST3160812AS=____________9LS6M819/a' phys_path='/pci@0,0/pci-ide@e/ide@0/cmdk@0,0:a' whole_disk=0 metaslab_array=14 metaslab_shift=30 ashift=9 asize=158882856960 is_log=0 tank version=10 name='tank' state=0 txg=9305484 pool_guid=6165551123815947851 hostname='cempedak' vdev_tree type='root' id=0 guid=6165551123815947851 children[0] type='raidz' id=0 guid=1802975...
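Label dumps like the one above come from zdb's label-reading mode; a minimal sketch of how to produce one (device path taken from the excerpt; zdb prints all four vdev labels, two at the front and two at the end of the device):

  # print the four ZFS vdev labels, including fields such as
  # whole_disk, ashift, guid, and the vdev_tree nvlist
  zdb -l /dev/dsk/c1d0s0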
2011 Jan 29
19
multiple disk failure
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20 minutes, and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( The failing cage hung the server and the box rebooted. After it rebooted, the entire pool is gone and in the state below. I had only written a few
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc@c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me b/c I didn't try and replace the log on a running system. My
2012 Jan 08
0
Pool faulted in a bad way
...path='/dev/dsk/c12t0d0s0' devid='id1,sd@x001b4d23002bb800/a' phys_path='/pci@0,0/pci8086,25f8@4/pci8086,370@0/pci17d3,1260@e/disk@0,0:a' whole_disk=1 DTL=154 bad config type 16 for stats children[1] type='disk' id=1 guid=7134885674951774601 path='/dev/dsk/c12t1d0s0'...
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
...path='/dev/dsk/c2t3d0s0' devid='id1,sd@t49455400000000000000000001000000cf1900000e000000/a' phys_path='/iscsi/disk@0000iqn.2006-03.com.domain-SAN10001,0:a' whole_disk=1 DTL=36 children[1] type='disk' id=1 guid=7833573669820754721 path='/dev/dsk/c2t4d0s0' devid=...
2010 May 07
0
confused about zpool import -f and export
...id: 0 guid: 15556832564812580834 path: '/dev/dsk/c0d0s0' devid: 'id1,cmdk@AQEMU_HARDDISK=QM00001/a' phys_path: '/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a' whole_disk: 0 create_txg: 4 children[1]: type: 'disk' id: 1 guid: 544113268733868414 path: '/dev/dsk/c0d1s0' devid: 'id1,cmdk@AQEMU_HARDDISK=QM00002/a' phys_path...
2007 Sep 18
5
ZFS panic in space_map.c line 125
...5 top_guid=3365726235666077346 guid=3365726235666077346 vdev_tree type='disk' id=0 guid=3365726235666077346 path='/dev/dsk/c3t50002AC00039040Bd0p0' devid='id1,sd@n50002ac00039040b/q' whole_disk=0 metaslab_array=13 metaslab_shift=31 ashift=9 asize=322117566464 -------------------------------------------- LABEL 1 -------------------------------------------- version=3 name='fpool0' state=0 txg=4 pool_guid=10406529929620343...
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two SATA disks in the mirror. Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc Unfortunately the first disk, the one with the grub loader, has failed with unrecoverable block write/read errors. Now I have the problem of importing rpool after the first disk has failed. So I decided to do "zpool import -f rpool" with only the second disk, but it hangs and the system is
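For reference, the usual sequence for force-importing a pool from the surviving half of a mirror looks like the sketch below (pool name taken from the excerpt; a generic illustration, not the poster's exact session):

  # list pools visible to import, without importing anything
  zpool import
  # force the import even though the pool looks in use or incomplete
  zpool import -f rpool
  # confirm which mirror half came up and which is UNAVAIL
  zpool status rpool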
2009 Apr 08
2
ZFS data loss
...te=0 txg=2589278 pool_guid=5644167510038135831 vdev_tree type='root' id=0 guid=5644167510038135831 children[0] type='mirror' id=0 guid=14615540212911926254 whole_disk=0 metaslab_array=17 metaslab_shift=31 ashift=9 asize=292543528960 children[0] type='disk' id=0 guid=3588260184558145093...
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
...2318286686 hostid: 4220169081 hostname: 'mini.home' top_guid: 18181370402585537036 guid: 18181370402585537036 vdev_tree: type: 'disk' id: 0 guid: 18181370402585537036 path: '/dev/da0p2' whole_disk: 0 metaslab_array: 14 metaslab_shift: 30 ashift: 9 asize: 999856013312 DTL: 869 (LABEL 1 - 3 identical) jack@opensolaris:~# zdb -l /dev/dsk/c4t0d1s1 -------------------------------------------- LABEL 0 -------------------------------------------- vers...
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
...isk' id: 0 guid: 15895240748538558983 path: '/dev/dsk/c7t1d0s0' devid: 'id1,sd@SATA_____Hitachi_HDT72101______STF607MH3A3KSK/a' phys_path: '/pci@0,0/pci1043,8231@12/disk@1,0:a' whole_disk: 1 metaslab_array: 23 metaslab_shift: 33 ashift: 9 asize: 1000191557632 is_log: 0 DTL: 605 -------------------------------------------- LABEL 1 -------------------------------------------- version: 22 name: 'puddle' state...
2011 Jan 04
0
zpool import hangs system
...guid: 5932373083307643211 path: '/dev/dsk/c0t0d0s0' devid: 'id1,sd@SATA_____ST31500341AS________________9VS0J0GC/a' phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@2/disk@0,0:a' whole_disk: 1 DTL: 27 children[1]: type: 'disk' id: 1 guid: 13323938879160252094 path: '/dev/dsk/c0t1d0s0' devid: 'id1,sd@SATA_____ST31500341AS________________9VS25DAN/a'...
2011 Nov 05
4
ZFS Recovery: What do I try next?
...create_txg: 4 children[0]: type: 'disk' id: 0 guid: 13554115250875315903 phys_path: '/pci@0,0/pci1002,4391@11/disk@3,0:q' whole_disk: 0 DTL: 57 create_txg: 4 path: '/bank3/hd/devs/loop0' children[1]: type: 'disk' id: 1 guid: 17894226827518944093...
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to try to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have tried a couple of the commands (zpool import -f and zpool import -FX llift) root@
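The flags mentioned in this thread are the pool-recovery (txg rewind) options; a hedged sketch of how they are typically tried, in escalating order (pool name taken from the excerpt; -n and -X behave as documented for zpool import):

  # plain forced import; fails if the missing log device is still required
  zpool import -f llift
  # -F rewinds to an earlier consistent txg; -n only reports what would be done
  zpool import -nF llift
  zpool import -F llift
  # -X (with -F) tries progressively older txgs; a last resort that can
  # discard the most recent writes
  zpool import -FX llift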
2013 Jan 08
3
pool metadata has duplicate children
I seem to have managed to end up with a pool that is confused about its child disks. The pool is faulted with corrupt metadata: pool: d state: FAULTED status: The pool metadata is corrupted and the pool cannot be opened. action: Destroy and re-create the pool from a backup source. see: http://illumos.org/msg/ZFS-8000-72 scan: none requested config: NAME STATE
2008 Jan 10
2
Assistance needed expanding RAIDZ with larger drives
...6 | path='/dev/dsk/c2d1s0' | devid='id1,cmdk@AST3400632NS=____________5NF1EDQL/a' | phys_path='/pci@0,0/pci-ide@12/ide@0/cmdk@1,0:a' | whole_disk=1 | DTL=33 | children[1] | type='disk' | id=1 | guid=15098521488705848306 | path='/dev/dsk/c3d1s0' | de...
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and it recently started failing to boot: it hangs after the copyright message whenever I use any of my GRUB menu options. Booting with an oi_148a LiveUSB I had around since installation, I ran some zdb traversals over the rpool and some zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed on the list with
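A hedged sketch of the kind of offline zdb traversal this refers to, run from the LiveUSB against the un-imported pool (standard zdb options; whether they complete on a damaged pool is exactly what the poster is probing):

  # dump the vdev labels directly from the root disk
  zdb -l /dev/dsk/c0t0d0s0   # device path here is hypothetical
  # -e reads the pool without importing it; -bb walks and tallies all blocks
  zdb -e -bb rpool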
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
...name='home.gladchenko.ru' top_guid=5515037892630596686 guid=5515037892630596686 vdev_tree type='disk' id=0 guid=5515037892630596686 path='/dev/ad4' devid='ad:5QM0WF9G' whole_disk=0 metaslab_array=14 metaslab_shift=32 ashift=9 asize=500103118848 -------------------------------------------- LABEL 1 -------------------------------------------- version=6 name='tank' state=0 txg=4 pool_guid=1206935926872564277...
2009 Jan 15
2
zfs drive keeps failing between export and import
I have a zpool that consists of a two-drive mirror. The two times I took the zpool offline, I had to resilver one of the drives (the same drive both times) when I imported it back. All drives in the pool show no read, write, or checksum errors and are new, so I'm looking at a software problem before hardware. Both drives are encrypted geli devices. I tried to reproduce the error with 1GB
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds, when I issue the command zpool list, it does not show any pool, and when I try the import again it says a device is missing from the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath. abcxxxx # zpool import pool: emcpool1 id:
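Since the symptoms began after a multipathing (PowerPath) upgrade, the device nodes the pool was imported from have likely moved; a hedged first step is to point the import at an explicit device directory (pool name from the excerpt; the directory is the standard Solaris default and may need adjusting for the actual PowerPath pseudo-device location):

  # search a specific device directory instead of the default
  zpool import -d /dev/dsk emcpool1
  # then verify the pool stays visible
  zpool list
  zpool status emcpool1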