Displaying 7 results from an estimated 7 matches for "c0t0d0s4".
2008 Jul 06
14
confusion and frustration with zpool
...local      DEGRADED 0 0 0
  mirror      ONLINE   0 0 0
    c6d1p0    ONLINE   0 0 0
    c0t0d0s3  ONLINE   0 0 0
  mirror      ONLINE   0 0 0
    c6d0p0    ONLINE   0 0 0
    c0t0d0s4  ONLINE   0 0 0
  mirror      UNAVAIL  0 0 0  corrupted data
    c8t0d0p0  ONLINE   0 0 0
    c0t0d0s5  ONLINE   0 0 0
errors: No known data errors
-bash-3.2# zpool history local
History for 'local':
2007-11-...
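The pool in this excerpt mixes healthy mirrors with one UNAVAIL vdev. A minimal sketch for picking the unhealthy vdevs out of `zpool status` output; since the real command needs a live ZFS host, it parses a shortened copy of the captured listing above instead:

```shell
# On a live system you would run:  zpool status local
# Here we parse a shortened copy of the thread's captured output.
cat <<'EOF' > /tmp/zpool_status.txt
local DEGRADED 0 0 0
mirror ONLINE 0 0 0
c6d1p0 ONLINE 0 0 0
mirror UNAVAIL 0 0 0 corrupted data
c8t0d0p0 ONLINE 0 0 0
EOF
# Print every pool or vdev whose state column is not ONLINE.
awk '$2 != "ONLINE" { print $1, $2 }' /tmp/zpool_status.txt
```

On the sample above this flags the pool itself (DEGRADED) and the broken mirror (UNAVAIL), which is where `zpool replace` or `zpool clear` attention would go.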
2010 Jan 13
3
Recovering a broken mirror
...there any way to recover from this or are they SOL?
Thanks in advance
# zpool status
no pools available
# zpool import
# ls /etc/zfs
#
ls /dev/dsk
c0t0d0s0 c0t0d0s3 c0t0d0s6 c1t0d0s1 c1t0d0s4 c1t0d0s7 c1t1d0s2 c1t1d0s5 c1t2d0 c1t2d0s2 c1t2d0s5 c1t3d0s0 c1t3d0s3 c1t3d0s6
c0t0d0s1 c0t0d0s4 c0t0d0s7 c1t0d0s2 c1t0d0s5 c1t1d0s0 c1t1d0s3 c1t1d0s6 c1t2d0s0 c1t2d0s3 c1t2d0s6 c1t3d0s1 c1t3d0s4
c0t0d0s2 c0t0d0s5 c1t0d0s0 c1t0d0s3 c1t0d0s6 c1t1d0s1 c1t1d0s4 c1t1d0s7 c1t2d0s1 c1t2d0s4 c1t3d0 c1t3d0s2 c1t3d0s5
# format
Searching for disks...done
AVAILABLE DISK SELECT...
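When `zpool import` prints nothing, as in the session above, it can help to tell it where to look. A hedged sketch of the usual options (these need a Solaris/ZFS host, so they are shown as comments; `tank` is a hypothetical pool name):

```shell
# zpool import               # scan the default /dev/dsk for pool labels
# zpool import -d /dev/dsk   # scan an explicit device directory
# zpool import -D            # also list pools that were marked destroyed
# zpool import -f tank       # force-import a pool once it shows up
```

If none of these show the pool, the on-disk labels themselves are likely damaged, which matches the "no pools available" symptom in the thread.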
2006 Sep 15
1
[Blade 150] ZFS: extreme low performance
....
I created a standard mirrored pool over two disk slices.
# zpool status
Pool: mypool
Status: ONLINE
scrub: none requested
config:
NAME          STATE   READ WRITE CKSUM
mypool        ONLINE  0 0 0
  mirror      ONLINE  0 0 0
    c0t0d0s4  ONLINE  0 0 0
    c0t2d0s4  ONLINE  0 0 0
Then I created a ZFS filesystem with no extra options:
# zfs create mypool/zfs01
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 106K 27.8G 25.5K /mypool
mypool/zfs01 24.5K 27.8G...
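The pool and filesystem in this snippet take only two commands to reproduce; a minimal sketch (slice names taken from the thread, and a ZFS host is required, hence the comments):

```shell
# zpool create mypool mirror c0t0d0s4 c0t2d0s4  # two-way mirror over slices
# zfs create mypool/zfs01                       # filesystem, default options
# zfs list -r mypool                            # verify USED/AVAIL/MOUNTPOINT
```

Note that mirroring two slices of the *same* disk would give redundancy against bad blocks but not against disk failure, and would halve write throughput, one plausible contributor to the "extreme low performance" in the subject line.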
2010 Aug 30
5
pool died during scrub
I have a bunch of sol10U8 boxes with ZFS pools, mostly raidz2 8-disk
stripes. They're all supermicro-based with retail LSI cards.
I've noticed a tendency for things to go a little bonkers during the
weekly scrub (they all scrub over the weekend), and that's when I'll
lose a disk here and there. OK, fine, that's sort of the point, and
they're
2005 Oct 26
1
Error message with fbt::copen:entry probe
All,
The attached script is causing the following error message ...
bash-3.00# ./zmon_bug.d
dtrace: error on enabled probe ID 2 (ID 4394: fbt:genunix:copen:entry):
invalid address (0xfd91747f) in predicate at DIF offset 120
dtrace: error on enabled probe ID 2 (ID 4394: fbt:genunix:copen:entry):
invalid address (0xfef81a3f) in predicate at DIF offset 120
Any ideas?
thxs
Joe
--
2006 Jul 13
7
system unresponsive after issuing a zpool attach
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM
partitions to ZFS.
I used Live Upgrade to migrate from U1 to U2 and that went without a
hitch on my SunBlade 2000. And the initial conversion of one side of the
UFS mirrors to a ZFS pool and subsequent data migration went fine.
However, when I attempted to attach the second side mirrors as a mirror
of the ZFS pool, all
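A sketch of the attach step being described, with hypothetical pool and device names (the original devices are not given in the excerpt; a ZFS host is required, hence the comments):

```shell
# zpool attach tank c0t0d0s4 c1t0d0s4  # attach second half; resilver starts
# zpool status tank                    # watch resilver progress
# iostat -xn 5                         # resilver I/O can make the box sluggish
```

The resilver that `zpool attach` triggers reads the entire live side of the mirror, which is one plausible reason a machine becomes unresponsive right after the attach.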
2008 Apr 29
24
recovering data from a detached mirrored vdev
Hi,
my system (solaris b77) was physically destroyed and I lost the data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like tct, http://www.porcupine.org/forensics/) I can use to recover at least some of the data.
thanks in advance for
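For the detached-vdev case, a first diagnostic step is to see what ZFS metadata is still on the slice. A sketch, assuming the surviving slice is visible as a device node (device name hypothetical; whether anything is recoverable depends on what the detach left behind):

```shell
# zdb -l /dev/dsk/c0t0d0s4   # dump any of the four ZFS labels still present
```

If `zdb -l` prints intact labels, there is at least raw metadata to work from; if the labels are empty or invalidated, recovery would need forensic reconstruction rather than a normal import.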