Displaying 20 results from an estimated 23 matches for "phys_path".
2008 Jun 05
6
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc@c4solutions.net>
NOTE: This is last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me b/c I didn't try and replace the log on a running
system. My
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
...disk'
                        id=0
                        guid=640233961847538260
                        path='/dev/dsk/c2t3d0s0'
                        devid='id1,sd@t49455400000000000000000001000000cf1900000e000000/a'
                        phys_path='/iscsi/disk@0000iqn.2006-03.com.domain-SAN10001,0:a'
                        whole_disk=1
                        DTL=36
                children[1]
                        type='disk'
                        id=1
                        guid=7833573669...
2010 May 07
0
confused about zpool import -f and export
...is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 15556832564812580834
path: '/dev/dsk/c0d0s0'
devid: 'id1,cmdk@AQEMU_HARDDISK=QM00001/a'
phys_path: '/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:a'
whole_disk: 0
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 544113268733868414
path: '/dev/dsk/c0d1s0'...
2012 Jan 08
0
Pool faulted in a bad way
...type='disk'
id=0
guid=5644370057710608379
path='/dev/dsk/c12t0d0s0'
devid='id1,sd@x001b4d23002bb800/a'
phys_path='/pci@0,0/pci8086,25f8@4/pci8086,370@0/pci17d3,1260@e/disk@0,0:a'
whole_disk=1
DTL=154
bad config type 16 for stats
children[1]
type='disk'...
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
...: 15895240748538558983
vdev_children: 2
vdev_tree:
type: 'disk'
id: 0
guid: 15895240748538558983
path: '/dev/dsk/c7t1d0s0'
devid: 'id1,sd@SATA_____Hitachi_HDT72101______STF607MH3A3KSK/a'
phys_path: '/pci@0,0/pci1043,8231@12/disk@1,0:a'
whole_disk: 1
metaslab_array: 23
metaslab_shift: 33
ashift: 9
asize: 1000191557632
is_log: 0
DTL: 605
--------------------------------------------
LABEL 1
----------------------...
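The excerpts in these threads are vdev label dumps in the style of `zdb -l` output. As a rough illustration only (a sketch of my own, not code from any of the posts; the helper name and the sample text are invented), the flat fields of such a dump can be collected into a Python dict:

```python
def parse_label_fields(text):
    """Collect the flat key/value pairs from a zdb-style label dump.

    Handles both the "key: value" and "key=value" spellings that appear
    in these excerpts; lines without a value (e.g. "children[0]:") are
    skipped rather than parsed as nesting.
    """
    fields = {}
    for raw in text.splitlines():
        line = raw.strip()
        # Pick whichever separator appears first on the line, so values
        # containing the other character (paths with ':', devids with '=')
        # are not split in the wrong place.
        hits = [(line.find(sep), sep) for sep in (':', '=') if sep in line]
        if not hits:
            continue
        _, sep = min(hits)
        key, _, val = line.partition(sep)
        if val.strip():
            fields[key.strip()] = val.strip().strip("'")
    return fields

# Sample text in the shape of the dumps quoted in these threads.
sample = """
    type: 'disk'
    id: 0
    path: '/dev/dsk/c7t1d0s0'
    phys_path='/pci@0,0/pci1043,8231@12/disk@1,0:a'
    whole_disk=1
    ashift: 9
    children[0]:
"""
label = parse_label_fields(sample)
```

This is only a convenience for eyeballing single-vdev labels; a real tool would walk the `children[n]` nesting instead of skipping it.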
2011 Jan 04
0
zpool import hangs system
...is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 5932373083307643211
path: '/dev/dsk/c0t0d0s0'
devid: 'id1,sd@SATA_____ST31500341AS________________9VS0J0GC/a'
phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@2/disk@0,0:a'
whole_disk: 1
DTL: 27
children[1]:
type: 'disk'
id: 1
guid: 13323938879160252094
path: '/dev/dsk/c0t1d0s0...
2011 Nov 05
4
ZFS Recovery: What do I try next?
...ift: 35
ashift: 9
asize: 6001161928704
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 13554115250875315903
phys_path: '/pci@0,0/pci1002,4391@11/disk@3,0:q'
whole_disk: 0
DTL: 57
create_txg: 4
path: '/bank3/hd/devs/loop0'
children[1]:
type: 'disk...
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it
before I shut down the server. Now I am not able to mount the pool. I am
not concerned with the data in this pool, but I would like to try to figure
out how to recover it.
I am running Nexenta 3.0 NCP (b134+).
I have tried a couple of the commands (zpool import -f and zpool import -FX
llift)
root@
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two sata disks in the mirror. Solaris 10 5.10
Generic_141415-08 i86pc i386 i86pc
Unfortunately the first disk with grub loader has failed with unrecoverable
block write/read errors.
Now I have the problem to import rpool after the first disk has failed.
So I decided to do "zpool import -f rpool" only with the second disk, but it
hangs and the system is
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
...'
state=0
txg=13
pool_guid=7417064082496892875
hostname='elatte_installcd'
vdev_tree
type='root'
id=0
guid=7417064082496892875
children[0]
type='disk'
id=0
guid=16996723219710622372
path='/dev/dsk/c1d0s0'
devid='id1,cmdk@AST3160812AS=____________9LS6M819/a'
phys_path='/pci@0,0/pci-ide@e/ide@0/cmdk@0,0:a'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=158882856960
is_log=0
tank
version=10
name='tank'
state=0
txg=9305484
pool_guid=6165551123815947851
hostname='cempedak'
vdev_tree
type='root'
id=0
guid=6165551123815...
2008 Jan 10
2
Assistance needed expanding RAIDZ with larger drives
...'disk'
| id=0
| guid=5385778296365299126
| path='/dev/dsk/c2d1s0'
| devid='id1,cmdk@AST3400632NS=____________5NF1EDQL/a'
| phys_path='/pci@0,0/pci-ide@12/ide@0/cmdk@1,0:a'
| whole_disk=1
| DTL=33
| children[1]
| type='disk'
| id=1
| guid=1509852148...
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and since
recently it fails to boot - hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
2009 Jun 29
5
zpool import issue
I'm having the following issue: I import the zpool and it shows the pool imported correctly, but after a few seconds, when I issue the command "zpool list", it does not show any pool; and when I try to import again, it says a device is missing in the pool. What could be the reason for this? And yes, this all started after I upgraded PowerPath.
abcxxxx # zpool import
pool: emcpool1
id:
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi,
Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
upgraded from 9.2-RELEASE?
I have two servers, with very different hardware (one is with soft raid
and the other is not) and after a zpool upgrade, no way to get the
server booting.
Do I miss something when upgrading?
I cannot get the error message for the moment. I reinstalled the raid
server under Linux and the other
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
...create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 14826633751084073618
path: '/dev/dsk/c5t0d0s0'
devid: 'id1,sd@SATA_____VBOX_HARDDISK____VBf6ff53d9-49330fdb/a'
phys_path: '/pci@0,0/pci8086,2829@d/disk@0,0:a'
whole_disk: 0
metaslab_array: 23
metaslab_shift: 28
ashift: 9
asize: 32172408832
is_log: 0
create_txg: 4
test:
version: 27
name: 'tes...
2012 Jan 17
6
Failing WD desktop drive in mirror, how to identify?
I have a desktop system with 2 ZFS mirrors. One drive in one mirror is
starting to produce read errors and slowing things down dramatically. I
detached it and the system is running fine. I can't tell which drive it is
though! The error message and format command let me know which pair the bad
drive is in, but I don't know how to get any more info than that, like the
serial number
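On the serial-number question above: the devid strings quoted throughout these dumps (e.g. `id1,sd@SATA_____Hitachi_HDT72101______STF607MH3A3KSK/a`) end with the drive's serial as the last underscore-padded token, so one way to identify a drive is to read it out of the devid. A minimal sketch (the helper name is mine, and the layout is assumed from the excerpts in these threads, not guaranteed for every vendor):

```python
def serial_from_devid(devid):
    """Pull the trailing serial-number token out of a Solaris devid
    string of the form 'id1,sd@SATA_____<model>______<serial>/a'.

    Assumes the layout seen in these zdb dumps; a vendor that embeds
    underscores in the serial itself would need different handling.
    """
    # Drop the 'id1,sd@' prefix and the '/a' minor-node suffix.
    body = devid.split('@', 1)[1].split('/', 1)[0]
    # Fields are padded with runs of underscores; the last token is the serial.
    tokens = [t for t in body.split('_') if t]
    return tokens[-1]
```

Cross-checking the result against `iostat -En` or the label printed on the drive is still the safe way to confirm before pulling a disk.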
2012 Oct 03
14
Changing rpool device paths/drivers
Hello all,
It was often asked and discussed on the list about "how to
change rpool HDDs from AHCI to IDE mode" and back, with the
modern routine involving reconfiguration of the BIOS, bootup
from separate live media, simple import and export of the
rpool, and bootup from the rpool. The documented way is to
reinstall the OS upon HW changes. Both are inconvenient to
say the least.
2010 Jul 06
3
Help with Faulted Zpool Call for Help(Cross post)
Hello list,
I posted this a few days ago on opensolaris-discuss@ list
I am posting here because there may be too much noise on the other lists.
I have been without this zfs set for a week now.
My main concern at this point is: is it even possible to recover this zpool?
How does the metadata work? What tool could I use to rebuild the
corrupted parts
or even find out what parts are corrupted.
most but
2010 Nov 11
8
zpool import panics
...is-backup'
vdev_tree:
type: 'root'
id: 0
guid: 15398414531935588736
children[0]:
type: 'disk'
id: 0
guid: 5041131819915543280
phys_path:
'/pci@0,0/pci8086,3410@9/pci1077,138@0/fp@0,0/disk@w2100001378ac0253,0:a'
whole_disk: 1
metaslab_array: 23
metaslab_shift: 38
ashift: 9
asize: 28001025916928
is_l...