Displaying 20 results from an estimated 32 matches for "vdev_tree".
2008 Sep 05
0
raidz pool metadata corrupted nexenta-core->freenas 0.7->nexenta-core
...0 0
c2d1 ONLINE 0 0 0
c3d0 ONLINE 0 0 0
c3d1 ONLINE 0 0 0
root@cempedak:/dev/rdsk#
root@cempedak:/dev/rdsk# zdb -vvv
syspool
version=10
name='syspool'
state=0
txg=13
pool_guid=7417064082496892875
hostname='elatte_installcd'
vdev_tree
type='root'
id=0
guid=7417064082496892875
children[0]
type='disk'
id=0
guid=16996723219710622372
path='/dev/dsk/c1d0s0'
devid='id1,cmdk@AST3160812AS=____________9LS6M819/a'
phys_path='/pci@0,0/pci-ide@e/ide@0/cmdk@0,0:a'
whole_disk=0
metaslab_array=14
metasla...
2007 Sep 18
5
ZFS panic in space_map.c line 125
...rdsk/c3t50002AC00039040Bd0p0
--------------------------------------------
LABEL 0
--------------------------------------------
version=3
name='fpool0'
state=0
txg=4
pool_guid=10406529929620343615
top_guid=3365726235666077346
guid=3365726235666077346
vdev_tree
type='disk'
id=0
guid=3365726235666077346
path='/dev/dsk/c3t50002AC00039040Bd0p0'
devid='id1,sd@n50002ac00039040b/q'
whole_disk=0
metaslab_array=13
metaslab_shift=31
ashi...
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
...------------------------------
version: 22
name: 'puddle'
state: 0
txg: 55553139
pool_guid: 13462109782214169516
hostid: 4421991
hostname: 'amd'
top_guid: 15895240748538558983
guid: 15895240748538558983
vdev_children: 2
vdev_tree:
type: 'disk'
id: 0
guid: 15895240748538558983
path: '/dev/dsk/c7t1d0s0'
devid: 'id1,sd@SATA_____Hitachi_HDT72101______STF607MH3A3KSK/a'
phys_path: '/pci@0,0/pci1043,8231@12/disk...
2010 May 07
0
confused about zpool import -f and export
...-----------------------
version: 22
name: 'syspool'
state: 1
txg: 384
pool_guid: 5607125904664422185
hostid: 4905600
hostname: 'nexenta_safemode'
top_guid: 7124011680357776878
guid: 15556832564812580834
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 7124011680357776878
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 750041956352
is_log: 0
create_txg: 4
children[0]:
type: 'disk'...
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and recently
it has stopped booting - it hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting from an oi_148a LiveUSB I have had around since
installation, I ran some zdb traversals over the rpool
and attempted zpool imports. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
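A sketch of the usual next diagnostic step, not from the original post (the device name is hypothetical):
    zdb -l /dev/rdsk/c0t0d0s0    # dump the four vdev labels from the root disk slice
    zdb -e -bcsvL rpool          # traverse the unimported pool's blocks without a full import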
2011 Jan 29
19
multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20min and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
The new drive cage started to fail; it hung the server, and the box
rebooted. After it rebooted, the entire pool was gone, in the state
below. I had only written a few
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
...0
--------------------------------------------
version: 6
name: 'Media'
state: 1
txg: 262869
pool_guid: 6503452912318286686
hostid: 4220169081
hostname: 'mini.home'
top_guid: 18181370402585537036
guid: 18181370402585537036
vdev_tree:
type: 'disk'
id: 0
guid: 18181370402585537036
path: '/dev/da0p2'
whole_disk: 0
metaslab_array: 14
metaslab_shift: 30
ashift: 9
asize: 999856013312
DTL: 869
(LABEL 1 - 3 identical)
j...
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
...-
LABEL 0
--------------------------------------------
version=6
name='tank'
state=0
txg=4
pool_guid=12069359268725642778
hostid=2719189110
hostname='home.gladchenko.ru'
top_guid=5515037892630596686
guid=5515037892630596686
vdev_tree
type='disk'
id=0
guid=5515037892630596686
path='/dev/ad4'
devid='ad:5QM0WF9G'
whole_disk=0
metaslab_array=14
metaslab_shift=32
ashift=9
asize=500103118848
----------...
2012 Jan 08
0
Pool faulted in a bad way
...some output from zdb:
# zdb tank | more
zdb: can't open tank: I/O error
version=14
name='tank'
state=0
txg=0
pool_guid=17315487329998392945
hostid=8783846
hostname='storage'
vdev_tree
type='root'
id=0
guid=17315487329998392945
bad config type 16 for stats
children[0]
type='raidz'
id=0
guid=14250359679717261360
nparity=2
metaslab_arra...
2009 Aug 12
4
zpool import -f rpool hangs
I had an rpool with two SATA disks in a mirror. Solaris 10 5.10
Generic_141415-08 i86pc i386 i86pc
Unfortunately, the first disk, which holds the GRUB loader, failed with unrecoverable
block write/read errors.
Now I have trouble importing rpool after the first disk's failure.
So I decided to run "zpool import -f rpool" with only the second disk, but it
hangs and the system is
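A degraded mirror should still import from the surviving half; a sketch of the usual invocation, with the altroot and device directory assumed rather than taken from the post:
    zpool import -d /dev/dsk -f -R /a rpool    # scan /dev/dsk and force-import under altroot /a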
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
...e run zpool upgrade, it tells us all pools are upgraded to the latest version.
below the zdb output:
zdb stor
    version=3
    name='stor'
    state=0
    txg=6559447
    pool_guid=14464037545511218493
    hostid=341941495
    hostname='fileserver011'
    vdev_tree
        type='root'
        id=0
        guid=14464037545511218493
        children[0]
                type='raidz'
                id=0
                guid=179558698360846845
                nparity=1
                metaslab_array=13
                ...
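The two version numbers in the subject can be read back directly; a sketch (pool name from the post, commands not from the original thread):
    zpool upgrade    # with no arguments, lists pools still on an older on-disk version
    zdb -u stor      # dump the active uberblock, whose version field the subject refers to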
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds, when I issue the command zpool list, it does not show any pool; when I try the import again, it says a device is missing from the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath.
abcxxxx # zpool import
pool: emcpool1
id:
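A multipathing upgrade can move the device nodes, so one common step is to point the import at the directory the new pseudo-devices live in; a sketch, with the directory hypothetical:
    zpool import -d /dev/dsk emcpool1    # rescan the named directory for the pool's devices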
2008 Dec 15
15
Need Help Invalidating Uberblock
...pt$ zdb -U -lv zpool.zones
--------------------------------------------
LABEL 0
--------------------------------------------
version=4
name='zones'
state=0
txg=4
pool_guid=17407806223688303760
top_guid=11404342918099082864
guid=11404342918099082864
vdev_tree
type='file'
id=0
guid=11404342918099082864
path='/opt/zpool.zones'
metaslab_array=14
metaslab_shift=28
ashift=9
asize=42944954368
--------------------------------------------
LABEL 1
----------------...
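Since the vdev is file-backed, the labels can also be dumped straight from the backing file; a sketch using the path from the post:
    zdb -l /opt/zpool.zones    # print all four vdev labels stored in the file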
2013 Jan 08
3
pool metadata has duplicate children
I seem to have managed to end up with a pool that is confused about its child disks. The pool is faulted with corrupt metadata:
pool: d
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
see: http://illumos.org/msg/ZFS-8000-72
scan: none requested
config:
NAME STATE
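Before destroying such a pool, a rewind attempt is often tried first; a sketch using the recovery flags (pool name from the post):
    zpool import -F -n d    # dry run: report whether discarding the last few txgs would make the pool importable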
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi,
Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
upgraded from 9.2-RELEASE?
I have two servers with very different hardware (one uses soft RAID
and the other does not), and after a zpool upgrade neither
server will boot.
Did I miss something when upgrading?
I cannot get the error message for the moment. I reinstalled the RAID
server under Linux and the other
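One plausible cause, a guess from the symptoms rather than anything confirmed in the thread: zpool upgrade enables features the old boot blocks cannot read, so the bootcode must be reinstalled. A sketch assuming a GPT disk ada0 with the freebsd-boot partition at index 1 (both hypothetical):
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0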
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
...can't open on the host bob?
Thank you in advance,
-Chris
chris@weston:~# zdb
rpool:
version: 22
name: 'rpool'
state: 0
txg: 7254
pool_guid: 17616386148370290153
hostid: 8413798
hostname: 'weston'
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 17616386148370290153
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 14826633751084073618
path: '/dev/dsk/c5t0d0s0'...
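For reference, the stock listing commands (nothing thread-specific assumed):
    zpool upgrade -v    # list every pool on-disk version this build supports
    zfs upgrade -v      # list every filesystem version this build supports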
2008 Jun 05
6
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc@c4solutions.net>
NOTE: This is last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me b/c I didn't try and replace the log on a running
system. My
2010 May 01
5
Single-disk pool corrupted after controller failure
...---
LABEL 2
--------------------------------------------
version: 14
name: 'tank'
state: 0
txg: 11420324
pool_guid: 6157028625215863355
hostid: 2563111091
hostname: ''
top_guid: 1987270273092463401
guid: 1987270273092463401
vdev_tree:
type: 'disk'
id: 0
guid: 1987270273092463401
path: '/dev/ad6s1d'
whole_disk: 0
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 497955373056
is_log: 0
DTL: 111
--------...
2009 Aug 05
0
zfs export and import between diferent controllers
...----
LABEL 0
--------------------------------------------
version=6
name='storage750'
state=1
txg=8
pool_guid=1304450798920256547
hostid=2302370682
hostname='xxxxxxxxxx'
top_guid=2004285697880137437
guid=2004285697880137437
vdev_tree
type='disk'
id=0
guid=2004285697880137437
path='/dev/da2'
whole_disk=0
metaslab_array=14
metaslab_shift=32
ashift=9
asize=749984022528
Is there any way to teach ZFS that the drive name has...
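ZFS rediscovers moved disks at import time, so the usual answer is an export/import cycle pointed at the new controller's device directory; a sketch (directory assumed, pool name from the post):
    zpool export storage750
    zpool import -d /dev storage750    # rescan /dev and rewrite the stored device paths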