search for: metaslab_array

Displaying 19 results from an estimated 33 matches for "metaslab_array".

2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
..._installcd' vdev_tree type='root' id=0 guid=7417064082496892875 children[0] type='disk' id=0 guid=16996723219710622372 path='/dev/dsk/c1d0s0' devid='id1,cmdk@AST3160812AS=____________9LS6M819/a' phys_path='/pci@0,0/pci-ide@e/ide@0/cmdk@0,0:a' whole_disk=0 metaslab_array=14 metaslab_shift=30 ashift=9 asize=158882856960 is_log=0 tank version=10 name='tank' state=0 txg=9305484 pool_guid=6165551123815947851 hostname='cempedak' vdev_tree type='root' id=0 guid=6165551123815947851 children[0] type='raidz' id=0 guid=18029757455913565148 npa...
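Label dumps like the excerpt above come from zdb's label reader. A minimal sketch, using the device path from this excerpt (run against the raw device, not a mounted filesystem):

    # print all four vdev labels, including metaslab_array/metaslab_shift
    zdb -l /dev/dsk/c1d0s0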
2007 Sep 18
5
ZFS panic in space_map.c line 125
...6235666077346 guid=3365726235666077346 vdev_tree type='disk' id=0 guid=3365726235666077346 path='/dev/dsk/c3t50002AC00039040Bd0p0' devid='id1,sd@n50002ac00039040b/q' whole_disk=0 metaslab_array=13 metaslab_shift=31 ashift=9 asize=322117566464 -------------------------------------------- LABEL 1 -------------------------------------------- version=3 name='fpool0' state=0 txg=4 pool_guid=10406529929620343615 top_guid=33657262...
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
...id: 0 guid: 15895240748538558983 path: '/dev/dsk/c7t1d0s0' devid: 'id1,sd@SATA_____Hitachi_HDT72101______STF607MH3A3KSK/a' phys_path: '/pci@0,0/pci1043,8231@12/disk@1,0:a' whole_disk: 1 metaslab_array: 23 metaslab_shift: 33 ashift: 9 asize: 1000191557632 is_log: 0 DTL: 605 -------------------------------------------- LABEL 1 -------------------------------------------- version: 22 name: 'puddle' state: 0 txg: 55553139...
2010 May 07
0
confused about zpool import -f and export
...pool_guid: 5607125904664422185 hostid: 4905600 hostname: 'nexenta_safemode' top_guid: 7124011680357776878 guid: 15556832564812580834 vdev_children: 1 vdev_tree: type: 'mirror' id: 0 guid: 7124011680357776878 metaslab_array: 23 metaslab_shift: 32 ashift: 9 asize: 750041956352 is_log: 0 create_txg: 4 children[0]: type: 'disk' id: 0 guid: 15556832564812580834 path: '/dev/dsk/c0d0s0'...
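For context on export versus forced import: -f only overrides the in-use check that triggers when the labels still record another host's hostid; after a clean export it is not needed. A sketch with a hypothetical pool name:

    zpool export tank     # old host: marks the pool exported in all labels
    zpool import tank     # new host: succeeds without -f after a clean export
    zpool import -f tank  # required only if the pool was never exported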
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk, and it recently started failing to boot - it hangs after the copyright message whenever I use any of my GRUB menu options. Booting from an oi_148a LiveUSB I have had around since installation, I ran some zdb traversals over the rpool and made several zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed in the list with
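When an import exhausts kernel memory, a read-only import is the usual way to inspect the pool without letting it replay or rewrite anything; a sketch, assuming a build new enough to support readonly=on, with an alternate root to keep mounts out of the way:

    zpool import -o readonly=on -R /a -f rpool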
2011 Jan 29
19
multiple disk failure
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20 minutes, and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( When the new drive cage failed, it hung the server, and the box rebooted. After it rebooted, the entire pool was gone and is in the state shown below. I had only written a few
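After a transient cage or controller failure, a low-risk first step is to have ZFS rescan the device nodes instead of trusting cached paths; a sketch for FreeBSD, with a hypothetical pool name:

    zpool import -d /dev          # scan /dev and list every importable pool
    zpool import -d /dev -f tank  # import by name once the disks reappear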
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
...4220169081 hostname: 'mini.home' top_guid: 18181370402585537036 guid: 18181370402585537036 vdev_tree: type: 'disk' id: 0 guid: 18181370402585537036 path: '/dev/da0p2' whole_disk: 0 metaslab_array: 14 metaslab_shift: 30 ashift: 9 asize: 999856013312 DTL: 869 (LABEL 1 - 3 identical) jack@opensolaris:~# zdb -l /dev/dsk/c4t0d1s1 -------------------------------------------- LABEL 0 -------------------------------------------- version: 6 name: '...
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
...ladchenko.ru' top_guid=5515037892630596686 guid=5515037892630596686 vdev_tree type='disk' id=0 guid=5515037892630596686 path='/dev/ad4' devid='ad:5QM0WF9G' whole_disk=0 metaslab_array=14 metaslab_shift=32 ashift=9 asize=500103118848 -------------------------------------------- LABEL 1 -------------------------------------------- version=6 name='tank' state=0 txg=4 pool_guid=12069359268725642778 hostid=2719189110...
2012 Jan 08
0
Pool faulted in a bad way
...vdev_tree type='root' id=0 guid=17315487329998392945 bad config type 16 for stats children[0] type='raidz' id=0 guid=14250359679717261360 nparity=2 metaslab_array=24 metaslab_shift=37 ashift=9 asize=14002698321920 is_log=0 root@storage:~# zdb tank version=14 name='tank' state=0 txg=0 pool_guid=17315487...
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two sata disks in the mirror. Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc Unfortunately the first disk, with the grub loader, has failed with unrecoverable block write/read errors. Now I have the problem of importing rpool after the first disk has failed. So I decided to do "zpool import -f rpool" with only the second disk, but it hangs and the system is
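One workaround when an import hangs probing a dead disk is to point the scan at a directory containing only the surviving device, so the failed one is never touched; a sketch with hypothetical device names:

    mkdir /tmp/good
    ln -s /dev/dsk/c1t1d0s0 /tmp/good/
    zpool import -d /tmp/good -f rpool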
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
...eserver011' vdev_tree type='root' id=0 guid=14464037545511218493 children[0] type='raidz' id=0 guid=179558698360846845 nparity=1 metaslab_array=13 metaslab_shift=37 ashift=9 asize=20914156863488 is_log=0 children[0] type='disk' id=0 guid=640233961847538260 ...
2009 Apr 08
2
ZFS data loss
..._guid=5644167510038135831 vdev_tree type='root' id=0 guid=5644167510038135831 children[0] type='mirror' id=0 guid=14615540212911926254 whole_disk=0 metaslab_array=17 metaslab_shift=31 ashift=9 asize=292543528960 children[0] type='disk' id=0 guid=3588260184558145093 path='/de...
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds, when I issue the command zpool list, it does not show any pool, and when I try to import again it says a device is missing from the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath. abcxxxx # zpool import pool: emcpool1 id:
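With multipathed storage such as PowerPath, the same labels can be visible through both native and pseudo device nodes, and the import may latch onto paths that later vanish. Restricting the scan to one device directory is a common diagnostic; a sketch (the directory holding the pseudo devices varies by platform):

    zpool import                       # list importable pools and their ids
    zpool import -d /dev/dsk emcpool1  # limit the scan to one directory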
2008 Dec 15
15
Need Help Invalidating Uberblock
...'zones' state=0 txg=4 pool_guid=17407806223688303760 top_guid=11404342918099082864 guid=11404342918099082864 vdev_tree type='file' id=0 guid=11404342918099082864 path='/opt/zpool.zones' metaslab_array=14 metaslab_shift=28 ashift=9 asize=42944954368 -------------------------------------------- LABEL 1 -------------------------------------------- version=4 name='zones' state=0 txg=4 pool_guid=17407806223688303760 top_guid=1140434291...
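Threads like this revolve around discarding a damaged active uberblock so that import falls back to an older one. On current OpenZFS the uberblock arrays in each label can at least be inspected first; a sketch using the file vdev from the excerpt (the -u modifier to -l may not exist on older zdb builds):

    zdb -l -u /opt/zpool.zones   # dump the labels plus their uberblocks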
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi, Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was upgraded from 9.2-RELEASE? I have two servers with very different hardware (one has soft RAID and the other does not), and after a zpool upgrade there is no way to get either server booting. Did I miss something when upgrading? I cannot get the error message for the moment. I reinstalled the raid server under Linux and the other
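A frequent cause of this exact symptom on FreeBSD is running zpool upgrade without reinstalling the boot blocks, leaving a loader too old to read the upgraded pool. A sketch for a GPT-partitioned disk, with the partition index and disk name as placeholders:

    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0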
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
...14826633751084073618 path: '/dev/dsk/c5t0d0s0' devid: 'id1,sd@SATA_____VBOX_HARDDISK____VBf6ff53d9-49330fdb/a' phys_path: '/pci@0,0/pci8086,2829@d/disk@0,0:a' whole_disk: 0 metaslab_array: 23 metaslab_shift: 28 ashift: 9 asize: 32172408832 is_log: 0 create_txg: 4 test: version: 27 name: 'test' state: 0 txg: 26 pool_guid: 13455895622924169480 hostid: 8413798 hostname: 'w...
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc@c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me because I didn't try to replace the log on a running system. My
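Later ZFS releases made this recovery path largely unnecessary: a pool whose separate log device is gone can be imported while dropping the missing log, assuming a version new enough to support -m:

    zpool import -m tank   # import even though the log vdev is missing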
2010 May 01
5
Single-disk pool corrupted after controller failure
...hostid: 2563111091 hostname: '' top_guid: 1987270273092463401 guid: 1987270273092463401 vdev_tree: type: 'disk' id: 0 guid: 1987270273092463401 path: '/dev/ad6s1d' whole_disk: 0 metaslab_array: 23 metaslab_shift: 32 ashift: 9 asize: 497955373056 is_log: 0 DTL: 111 -------------------------------------------- LABEL 3 -------------------------------------------- version: 14 name: 'tank' state: 0 txg: 11420324 poo...
2009 Aug 05
0
zfs export and import between diferent controllers
...47 hostid=2302370682 hostname='xxxxxxxxxx' top_guid=2004285697880137437 guid=2004285697880137437 vdev_tree type='disk' id=0 guid=2004285697880137437 path='/dev/da2' whole_disk=0 metaslab_array=14 metaslab_shift=32 ashift=9 asize=749984022528 Is there any way to tell ZFS that the drive name has changed?
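The usual fix when disks move between controllers is an export followed by a re-import with a device-directory scan; import locates pools by the guids in the labels, not by the recorded paths. A sketch with a hypothetical pool name:

    zpool export tank
    zpool import -d /dev tank   # rescan /dev; the stale path in the label is ignored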