Displaying 4 results from an estimated 4 matches for "metaclear".
2006 Jul 13 (7 replies): system unresponsive after issuing a zpool attach
...of
errors on the console about failed memory allocations.
Any thoughts/suggestions?
The data I migrated consisted of about 80GB. Here's the general flow of
what I did:
1. break the SVM mirrors
metadetach d5 d51
metadetach d6 d61
metadetach d7 d71
2. remove the SVM mirrors
metaclear d51
metaclear d61
metaclear d71
3. combine the partitions with format. They were contiguous
partitions on s4, s5 & s6 of the disk; I just made a single
partition on s4 and cleared s5 & s6.
4. create the pool
zpool create storage cXtXdXs4
5. create three filesystems
zfs cre...
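The migration steps quoted in this post can be sketched as a dry-run shell script. The metadevice names (d5/d51 and so on), the pool name "storage", and the placeholder slice cXtXdXs4 are taken from the post itself; the filesystem name in step 5 is illustrative, since the original message is truncated there. The commands are echoed rather than executed, because metaclear and zpool create are destructive.

```shell
# Dry-run sketch of the SVM-to-ZFS migration steps from the post above.
# Nothing is executed; each command is only printed.
run() { echo "would run: $*"; }

# 1. break the SVM mirrors (detach one submirror from each)
run metadetach d5 d51
run metadetach d6 d61
run metadetach d7 d71

# 2. remove the detached submirrors
run metaclear d51
run metaclear d61
run metaclear d71

# 3. merge slices s4, s5 & s6 into a single s4 with format(1M)
#    (interactive; not shown)

# 4. create the pool on the merged slice
run zpool create storage cXtXdXs4

# 5. create filesystems in the pool (name is illustrative)
run zfs create storage/fs1
```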
2009 Jan 22 (3 replies): Failure to boot from zfs on Sun v880
...24 s 35GB c1t1d0s0
swap
d3 m 16GB d13
d13 s 16GB c1t0d0s3
d23 s 16GB c1t1d0s3
/var
d5 m 8.0GB d15
d15 s 16GB c1t0d0s1
d25 s 16GB c1t1d0s1
I removed c1t1d0 from the mirror:
# metadetach d4 d24
# metaclear d24
# metadetach d3 d23
# metaclear d23
# metadetach d5 d25
# metaclear d25
then removed the metadb from c1t1d0s7
# metadb -d c1t1d0s7
Resized s0 on c1t1d0 to include the whole disk and relabelled with an
SMI label.
Created the zfs root pool:
# zpool create rpool c1t1d0s0
Created new BE:
# luc...
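The root-disk steps in this post can likewise be sketched as a dry-run script. The metadevice and device names (d3/d4/d5, their submirrors, c1t1d0, rpool) come from the post; the lucreate arguments and BE name are illustrative, since the original message is cut off at "luc...". Commands are echoed, not executed.

```shell
# Dry-run sketch of freeing the second root disk from SVM and reusing
# it for a ZFS root pool, per the post above. Nothing is executed.
run() { echo "would run: $*"; }

# detach and clear the submirrors on c1t1d0
run metadetach d4 d24 && run metaclear d24   # /
run metadetach d3 d23 && run metaclear d23   # swap
run metadetach d5 d25 && run metaclear d25   # /var

# drop the state database replica on the freed disk
run metadb -d c1t1d0s7

# relabel c1t1d0 with an SMI label via format(1M) (interactive), then:
run zpool create rpool c1t1d0s0

# create the new boot environment (BE name "zfsBE" is illustrative)
run lucreate -n zfsBE -p rpool
```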
2006 Jan 27 (2 replies): Do I have a problem? (longish)
...scribe the situation. I have 4 disks in a zfs/svm config:
c2t9d0 9G
c2t10d0 9G
c2t11d0 18G
c2t12d0 18G
c2t11d0 is divided in two:
selecting c2t11d0
[disk formatted]
/dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M).
/dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M).
/dev/dsk/c2t11d0s2 is in use by zpool storedge. Please see zpool(1M).
/dev/dsk/c2t11d0s7 contains an SVM mdb. Please see metadb(1M).
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 7506 + 2 (reserved cylinders)
Part Tag Flag...
2006 Nov 07 (6 replies): Best Practices recommendation on x4200
Greetings all-
I have a new X4200 that I'm getting ready to deploy. It has four 146 GB SAS drives. I'd like to set up the box for maximum redundancy on the data stored on these drives. Unfortunately, it looks like ZFS boot/root aren't really options at this time. The LSI Logic controller in this box only supports either a RAID0 array with all four disks, or a RAID 1