search for: metadb

Displaying 6 results from an estimated 6 matches for "metadb".

2009 Jun 02
0
zfs - zvol as a metadb repository
This looks like the same issue as: 6829176 dd w/ large block size fails on a zvol's character device (metadb uses the character interface even if you specify the block one). - Eric -- This message posted from opensolaris.org
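A minimal sketch of the behaviour bug 6829176 describes, assuming a hypothetical zvol named rpool/metadbvol (pool and volume names are illustrative, not from the post):

```shell
# Create a small zvol to stand in for a metadb repository (names assumed).
zfs create -V 100m rpool/metadbvol

# Writing through the block device works with a large block size:
dd if=/dev/zero of=/dev/zvol/dsk/rpool/metadbvol bs=1024k count=1

# metadb goes through the character (raw) device regardless of which
# device you name, and per the bug, dd with a large block size fails there:
dd if=/dev/zero of=/dev/zvol/rdsk/rpool/metadbvol bs=1024k count=1
```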
2006 Nov 07
6
Best Practices recommendation on x4200
Greetings all- I have a new X4200 that I'm getting ready to deploy. It has four 146 GB SAS drives. I'd like to setup the box for maximum redundancy on the data stored on these drives. Unfortunately, it looks like ZFS boot/root aren't really options at this time. The LSI Logic controller in this box only supports either a RAID0 array with all four disks, or a RAID 1
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, We have a number of 4200s setup using a combination of an SVM 4 way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this: / 6GB UFS s0, Swap 8GB s1, /var 6GB UFS s3, Metadb 50MB UFS s4, /data 48GB ZFS s5. For SVM we do a 4 way mirror on /, swap, and /var. So we have 3 SVM mirrors: d0=root (sub mirrors d10, d20, d30, d40), d1=swap (sub mirrors d11, d21, d31, d41), d3=/var (sub mirrors d13, d23, d33, d43). For ZFS we have a single Raidz set across all four disks s5 E...
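The 4-way root mirror this post describes can be sketched as follows. The metadevice names (d0, d10..d40) are from the post; the disk names (c0t0d0 etc.) are assumptions, and the exact syntax should be checked against metainit(1M):

```shell
# One submirror (simple concat) per disk for the root slice s0:
metainit d10 1 1 c0t0d0s0
metainit d20 1 1 c0t1d0s0
metainit d30 1 1 c0t2d0s0
metainit d40 1 1 c0t3d0s0

# Create the mirror on the first submirror, then attach the other three;
# each metattach triggers a resync of the new submirror.
metainit d0 -m d10
metattach d0 d20
metattach d0 d30
metattach d0 d40
```

The same pattern repeats for d1 (swap, s1) and d3 (/var, s3).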
2009 Jan 22
3
Failure to boot from zfs on Sun v880
...d13 s 16GB c1t1d0s3 /var d5 m 8.0GB d15 d15 s 16GB c1t0d0s1 d25 s 16GB c1t1d0s1 I removed c1t1d0 from the mirror: # metadetach d4 d24 # metaclear d24 # metadetach d3 d23 # metaclear d23 # metadetach d5 d25 # metaclear d25 then removed the metadb from c1t1d0s7 # metadb -d c1t1d0s7 Resized s0 on c1t1d0 to include the whole disc and relabelled with an SMI label. Created the zfs root pool: # zpool create rpool c1t1d0s0 Created new BE: # lucreate -c Sol11_b96 -n Sol11_b96_zfs -p rpool This ran fine, so I activated the new BE and rebooted...
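Collected as one sequence, the migration steps from this excerpt look like the following sketch (metadevice and disk names as given in the post; the final activate/reboot commands are assumed, since the post only says the new BE was activated and rebooted):

```shell
# Detach and clear the c1t1d0 submirrors:
metadetach d4 d24 && metaclear d24
metadetach d3 d23 && metaclear d23
metadetach d5 d25 && metaclear d25

# Remove the SVM state database replica from that disk:
metadb -d c1t1d0s7

# (In format(1M): relabel c1t1d0 with an SMI label and grow s0
#  to cover the whole disk.)

# Build the ZFS root pool and a new boot environment on it:
zpool create rpool c1t1d0s0
lucreate -c Sol11_b96 -n Sol11_b96_zfs -p rpool

# Activate the new BE and reboot (assumed commands):
luactivate Sol11_b96_zfs
init 6
```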
2006 Jul 13
7
system unresponsive after issuing a zpool attach
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM partitions to ZFS. I used Live Upgrade to migrate from U1 to U2 and that went without a hitch on my SunBlade 2000. And the initial conversion of one side of the UFS mirrors to a ZFS pool and subsequent data migration went fine. However, when I attempted to attach the second side mirrors as a mirror of the ZFS pool, all
2006 Jan 27
2
Do I have a problem? (longish)
...ting c2t11d0 [disk formatted] /dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M). /dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M). /dev/dsk/c2t11d0s2 is in use by zpool storedge. Please see zpool(1M). /dev/dsk/c2t11d0s7 contains an SVM mdb. Please see metadb(1M). format> partition partition> print Current partition table (original): Total disk cylinders available: 7506 + 2 (reserved cylinders) Part Tag Flag Cylinders Size Blocks 0 home wm 0 - 3783 8.50GB (3784/0/0) 17830208 1 home...