Displaying 11 results from an estimated 11 matches for "create_txg".
2010 May 07
0
confused about zpool import -f and export
...24011680357776878
guid: 15556832564812580834
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 7124011680357776878
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 750041956352
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 15556832564812580834
path: '/dev/dsk/c0d0s0'
devid: 'id1,cmdk@AQEMU_HARDDISK=QM00001/a'
phys_path: '/pci@0,0/...
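A minimal sketch of how a label like the one above is usually inspected, and of the clean hand-off between hosts that avoids needing -f at all; the pool name 'tank' is a placeholder, only the device path comes from the excerpt:

  # print the four on-disk labels for the vdev shown above
  zdb -l /dev/dsk/c0d0s0

  # export on the old host, import on the new one;
  # -f forces the import when the label still records another hostid
  zpool export tank
  zpool import -f tank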
2008 Jun 05
6
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc at c4solutions.net>
NOTE: This is a last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me because I didn't try to replace the log on a running
system. My
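On releases new enough to support it (roughly zpool version 19 and later), a pool whose separate log device has disappeared can usually be imported without this manual procedure; a hedged sketch, with 'tank' as a placeholder pool name:

  zpool import -m tank                       # -m imports even though the log device is missing
  zpool status -v tank                       # the dead log vdev should show as UNAVAIL
  zpool remove tank <guid-of-missing-log>    # then drop it from the configuration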
2011 Nov 05
4
ZFS Recovery: What do I try next?
...type: 'raidz'
id: 0
guid: 10967243523656644777
nparity: 1
metaslab_array: 23
metaslab_shift: 35
ashift: 9
asize: 6001161928704
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 13554115250875315903
phys_path: '/pci@0,0/pci1002,4391@11/disk@3,0:q'
whole_disk: 0...
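A hedged sketch of the usual escalation for a pool in this state, from least to most invasive; 'tank' stands in for the real pool name and none of these commands come from the thread itself:

  zpool import -o readonly=on -f tank   # read-only import, writes nothing to the pool
  zpool import -F -n tank               # dry run: would a transaction-group rewind succeed?
  zpool import -F tank                  # perform the rewind, discarding the last few txgs
  zdb -e -d tank                        # walk the datasets of the exported pool without importing it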
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it
before I shut down the server. Now I am not able to mount the pool. I am
not concerned with the data in this pool, but I would like to try to figure
out how to recover it.
I am running Nexenta 3.0 NCP (b134+).
I have tried a couple of the commands (zpool import -f and zpool import -FX
llift)
root at
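Build 134 is past the point where log devices became removable (zpool version 19), so the usual next step is an import that tolerates the missing log, assuming the installed zpool supports the -m option; 'llift' is the pool name from the post:

  zpool import -m -f llift   # import while ignoring the missing (ramdisk) log device
  zpool status -v llift      # the ramdisk should then show as UNAVAIL / removed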
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
...ion: 22
name: 'rpool'
state: 0
txg: 7254
pool_guid: 17616386148370290153
hostid: 8413798
hostname: 'weston'
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 17616386148370290153
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 14826633751084073618
path: '/dev/dsk/c5t0d0s0'
devid: 'id1,sd@SATA_____VBOX_HARDDISK____VBf6ff53d9-49330fdb/a'
phys_path:...
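The label above reports pool version 22; a short, hedged checklist for comparing the on-disk versions with what the b145 binaries support, plus the boot-block refresh that an x86 root-pool upgrade normally requires (the slice path is taken from the label above):

  zpool upgrade -v          # pool versions this build understands
  zpool get version rpool   # on-disk pool version (22 in the label shown)
  zfs upgrade -v            # filesystem versions this build understands
  zfs get version rpool     # current filesystem version

  # after upgrading a root pool, refresh the boot blocks so GRUB can still read it
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0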
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and recently it started
failing to boot - it hangs after the copyright message whenever I use
any of my GRUB menu options.
Booting from an oi_148a LiveUSB I have had around since installation,
I ran some zdb traversals over the rpool and attempted some zpool
imports. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
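A hedged sketch of the kind of workaround usually suggested for imports that exhaust RAM: cap the ARC and relax some assertions via /etc/system, then attempt a minimal import. The values are illustrative, not taken from the thread, and the readonly option only exists on builds newer than some oi_148a media:

  # /etc/system additions (illustrative values)
  set zfs:zfs_arc_max = 0x20000000   # cap the ARC at 512 MB
  set zfs:zfs_recover = 1            # tolerate some on-disk inconsistencies
  set aok = 1                        # turn certain assertion panics into warnings

  # after rebooting the LiveUSB with those settings:
  zpool import -N -f rpool                  # import without mounting any datasets
  zpool import -o readonly=on -N -f rpool   # read-only variant, if the build supports it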
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi,
Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
upgraded from 9.2-RELEASE?
I have two servers with very different hardware (one uses soft RAID and
the other does not), and after a zpool upgrade there is no way to get
either server booting.
Did I miss something when upgrading?
I cannot get the error message at the moment. I reinstalled the raid
server under Linux and the other
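A common explanation for this symptom is that zpool upgrade enables pool features the old boot blocks cannot read, so the boot code has to be rewritten after the upgrade; a sketch assuming GPT partitioning, with the disk name and partition index as placeholders to be checked against gpart show:

  # repeat for every disk the system can boot from
  gpart show ada0                                            # find the freebsd-boot partition index
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0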
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
...'/dev/da0p2'
whole_disk: 0
DTL: 869
children[1]:
type: 'disk'
id: 1
guid: 17772452695039664796
path: '/dev/da2p2'
whole_disk: 0
DTL: 868
create_txg: 0
(LABEL 1 - 3 identical)
Thanks,
Norbert
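A hedged first step for a FAULTED mirror like this is to compare the labels on both halves before attempting anything forceful; the pool name 'tank' is a placeholder:

  zdb -l /dev/da0p2       # dump the four labels on the first half of the mirror
  zdb -l /dev/da2p2       # and on the second half; compare txg, guid and state fields

  zpool import -f tank    # forced import if the labels agree
  zpool import -fF tank   # otherwise try rewinding to an earlier transaction group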
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if
it is vdev-specific or pool-wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives and some normal 512B-sector drives, and was wondering if the ashift
can be set per vdev or only per pool. Theoretically, this would save me
some size on
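The label excerpts elsewhere on this page already hint at the answer: ashift is recorded per top-level vdev, not per pool, so a mixed pool simply carries a different ashift on each vdev. A quick way to confirm on an existing pool ('tank' is a placeholder name):

  zdb -C tank | grep ashift   # one line per top-level vdev; a mixed pool may show both 9 and 12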
2010 Nov 11
8
zpool import panics
...7,138@0/fp@0,0/disk@w2100001378ac0253,0:a'
whole_disk: 1
metaslab_array: 23
metaslab_shift: 38
ashift: 9
asize: 28001025916928
is_log: 0
DTL: 261
create_txg: 4
path: '/dev/dsk/c3t2100001378AC0253d0s0'
devid: 'id1,sd@n202a001378ac0271/a'
children[1]:
type: 'disk'
id: 1
guid: 4110130254866694272...
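A hedged sketch for gathering the pool configuration without triggering the panic, followed by a dry-run rewind import; 'tank' is a placeholder pool name and the device path comes from the excerpt above:

  zdb -e -C tank                            # read the config from the on-disk labels, no import
  zdb -l /dev/dsk/c3t2100001378AC0253d0s0   # inspect the labels on one of the disks directly

  zpool import -n -f -F tank                # dry run: report whether a txg rewind would import cleanly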