Displaying 12 results from an estimated 12 matches for "nparity".
2008 Sep 05
0
raidz pool metadata corrupted nexenta-core->freenas 0.7->nexenta-core
...ray=14
metaslab_shift=30
ashift=9
asize=158882856960
is_log=0
tank
version=10
name='tank'
state=0
txg=9305484
pool_guid=6165551123815947851
hostname='cempedak'
vdev_tree
type='root'
id=0
guid=6165551123815947851
children[0]
type='raidz'
id=0
guid=18029757455913565148
nparity=1
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1280228458496
is_log=0
children[0]
type='disk'
id=0
guid=14740261559114907785
path='/dev/dsk/c2d0s0'
devid='id1,cmdk@AST3320620AS=____________3QF0WLTV/a'
phys_path='/pci@0,0/pci10de,26f@10/pci-ide@8/ide@0/cmdk@0,0:a...
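The dump above is zdb label output. As a hedged sketch (the device path is taken from the excerpt and will differ on other systems), the same metadata can be read straight from a member disk's on-disk labels, which is usually the first check when pool metadata is suspect:
# Print the four ZFS labels stored on the vdev; each should repeat the
# pool_guid, txg and vdev_tree shown in the excerpt above.
zdb -l /dev/dsk/c2d0s0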
2011 Jan 29
19
multiple disk failure
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20 minutes, and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
When the cage failed it hung the server and the box rebooted. After it
rebooted, the entire pool is gone and in the state below. I had only
written a few
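For a pool that vanishes after a crash like this, a hedged first step (the pool name 'tank' is only a placeholder) is to see what the system can still detect before forcing anything:
# List pools that are visible on disk but not currently imported.
zpool import
# If the pool shows up with its vdevs, attempt a forced import.
zpool import -f tank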
2012 Jan 08
0
Pool faulted in a bad way
...'storage'
vdev_tree
type='root'
id=0
guid=17315487329998392945
bad config type 16 for stats
children[0]
type='raidz'
id=0
guid=14250359679717261360
nparity=2
metaslab_array=24
metaslab_shift=37
ashift=9
asize=14002698321920
is_log=0
root at storage:~# zdb tank
version=14
name='tank'
state=0...
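When a faulted pool can no longer be examined through the cache file the way 'tank' is above, a hedged variant is to have zdb read the on-disk labels instead (pool name 'storage' taken from the excerpt):
# -e treats the pool as exported and takes its configuration from the
# device labels rather than /etc/zfs/zpool.cache.
zdb -e storage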
2008 Jun 05
6
slog / log recovery is here!
(From the README)
# Jeb Campbell <jebc at c4solutions.net>
NOTE: This is a last resort if you need your data now. This worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.
PS -- This worked for me b/c I didn't try to replace the log on a running
system. My
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if
it is vdev specific, or pool wide. Google didn't seem to know.
I'm considering a mixed pool with some "advanced format" (4KB sector)
drives and some normal 512B sector drives, and was wondering if the ashift
can be set per vdev, or only per pool. Theoretically, this would save me
some size on
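For what it's worth, the zdb dumps elsewhere in these threads show ashift recorded under each top-level vdev rather than at the pool level. A hedged way to check a given pool (the name is a placeholder):
# The cached config lists one ashift per top-level vdev (raidz, mirror, disk).
zdb -C tank | grep ashift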
2011 Nov 05
4
ZFS Recovery: What do I try next?
...me: 'gir'
vdev_tree:
type: 'root'
id: 0
guid: 3936305481264476979
children[0]:
type: 'raidz'
id: 0
guid: 10967243523656644777
nparity: 1
metaslab_array: 23
metaslab_shift: 35
ashift: 9
asize: 6001161928704
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0...
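For a pool stuck like this, one hedged next step on builds that have the recovery import is a rewind; the pool name below is a placeholder, since the excerpt only shows the name 'gir':
# -n only reports whether discarding the last few transactions would make
# the pool importable; drop it to actually perform the rewind import.
zpool import -F -n tank
zpool import -F tank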
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
...hostname='fileserver011'
vdev_tree
type='root'
id=0
guid=14464037545511218493
children[0]
type='raidz'
id=0
guid=179558698360846845
nparity=1
metaslab_array=13
metaslab_shift=37
ashift=9
asize=20914156863488
is_log=0
children[0]
type='disk'
id=0
...
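When the pool version and the on-disk uberblock/label versions appear out of step, a hedged sketch of checking and finishing the upgrade (pool name is a placeholder):
# Show the pool versions this system supports, report pools below the
# current version, then upgrade a specific pool.
zpool upgrade -v
zpool upgrade
zpool upgrade tank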
2011 Jan 04
0
zpool import hangs system
...91
pool_guid: 13362623912425247739
hostid: 945475
hostname: 'nexenta01'
top_guid: 18128310706829628764
guid: 5932373083307643211
vdev_children: 1
vdev_tree:
type: 'raidz'
id: 0
guid: 18128310706829628764
nparity: 2
metaslab_array: 23
metaslab_shift: 36
ashift: 9
asize: 7501443235840
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 5932373083307643211
path: '/dev/dsk/c0t0d0s0'...
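When an import hangs the machine, a hedged way to narrow things down, assuming the build supports these options, is to import without mounting anything and under a temporary alternate root (pool name is a placeholder):
# -N skips mounting datasets; -R sets an alternate root and keeps the
# import out of the persistent /etc/zfs/zpool.cache.
zpool import -f -N -R /mnt tank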
2013 Jan 08
3
pool metadata has duplicate children
I seem to have managed to end up with a pool that is confused about its child disks. The pool is faulted with corrupt metadata:
pool: d
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
see: http://illumos.org/msg/ZFS-8000-72
scan: none requested
config:
NAME STATE
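A hedged way to see which child vdevs the on-disk configuration of pool 'd' actually records, without trying to import it:
# Read the configuration from the labels of the unimported pool; duplicated
# entries show up in the children[] array of the vdev_tree.
zdb -e -C d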
2008 Jan 10
2
Assistance needed expanding RAIDZ with larger drives
...hostname='mammoth'
| vdev_tree
| type='root'
| id=0
| guid=5629347939003043989
| children[0]
| type='raidz'
| id=0
| guid=1325151684809734884
| nparity=1
| metaslab_array=14
| metaslab_shift=33
| ashift=9
| asize=1600289505280
| is_log=0
| children[0]
| type='disk'
| id=0
|...
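The usual pattern for growing a raidz vdev is to replace each member with a larger drive, one at a time, letting the resilver finish between replacements. Only a hedged sketch here, with hypothetical device and pool names; note that the autoexpand property postdates this 2008 thread, and older releases needed an export/import after the last replacement:
# Replace one member, wait for resilver to complete, then do the next disk.
zpool replace tank c2d0 c3d0
zpool status tank
# After every member is replaced, let the pool grow (newer releases only).
zpool set autoexpand=on tank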
2010 May 16
9
can you recover a pool if you lose the zil (b134+)
I was messing around with a ramdisk on a pool and I forgot to remove it
before I shut down the server. Now I am not able to mount the pool. I am
not concerned with the data in this pool, but I would like to try to figure
out how to recover it.
I am running Nexenta 3.0 NCP (b134+).
I have tried a couple of the commands (zpool import -f and zpool import -FX
llift)
root at
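On b134-era bits the import code can, assuming the -m option is present in that build, cope with a missing log device directly; a hedged sketch using the pool name from the excerpt:
# -m imports even though a separate log device is missing; add -F to rewind
# to the last good txg if the current one is unusable.
zpool import -f -m llift
zpool import -f -F -m llift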
2007 Sep 19
3
ZFS panic when trying to import pool
I have a raid-z zfs filesystem with 3 disks. The disks were starting to have read and write errors.
The disks were so bad that I started to get trans_err. The server locked up and was reset. Now, when trying to import the pool, the system panics.
I installed the latest Recommended patches on my Solaris U3 and also installed the latest kernel patch (120011-14).
But still when trying to do zpool import
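Solaris 10 U3 has no read-only import, so this is only a hedged sketch of what much later ZFS releases allow; importing read-only avoids writing to the pool and can sidestep a panic triggered while replaying damaged state (pool name is a placeholder):
# Nothing is written during a read-only import, so damaged state is not
# replayed or updated.
zpool import -f -o readonly=on tank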