Hi,
On Thu, Nov 11, 2004 at 10:44:10PM -0500, Brian Szymanski wrote:
> After a long and happy time with vinum under 4.8 -> 4.10, I'm finding
> things very broken in 5.3. The config I'm trying to accomplish is
> relatively simple, just a root mirrored volume configuration which worked
> under 4.x.
A mirrored root configuration isn't that straightforward, imo :) But it
should work.
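For the archives, a mirrored root in gvinum would look something like the
config below. The device names are made up, and there are more gotchas to
bootstrapping a vinum root than fit in one mail, so treat this as a sketch
and check gvinum(8) and the Handbook first:

  drive m1 device /dev/ad0s1h
  drive m2 device /dev/ad2s1h
  volume root
    plex org concat
      sd length 0 drive m1
    plex org concat
      sd length 0 drive m2

(Two plexes in one volume mirror each other.) You also need
geom_vinum_load="YES" in /boot/loader.conf and an fstab entry mounting
/dev/gvinum/root as /.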
> Using vinum, I lose state information for the drive on ad2 after reboot -
> M2 is shown in "vinum l" output only as "referenced"...
That is to be expected, as you discovered below...
> Browsing some mailing lists I found that gvinum is the way to go these
> days, so I changed to using geom_vinum/gvinum, and the information is
> retained across boot, but when I try to boot to the root volume, it says
> that the drive is not UFS. When I boot on another partition to look at the
> situation, I found that there were no entries in /dev for /dev/ad0s1a. I
> wanted to create an ad0s1a entry with mknod, but of course we've got devfs
> now, so that didn't work. I'm stumped and not sure how to proceed. Any
> ideas?
That's strange. What is the output of:

  fdisk ad0
  bsdlabel ad0s1

Maybe something in GEOM gets confused...
If the disappearing device node problem is fixed, gvinum should work in
this case.
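Two more things worth checking while you're at it (I'm assuming a stock
5.3 kernel here):

  # does devfs know about the slice at all?
  ls -l /dev/ad0*

  # GEOM's own view of the disk; ad0s1a should show up here
  sysctl -n kern.geom.conftxt | grep ad0

If ad0s1a is missing from GEOM's config as well, the label itself is
suspect, not devfs.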
> I originally was trying a complex configuration like so:
> drive A 200G
> drive B 200G
> drive C 100G
> drive D 100G
>
> I set the concat of all of drives C+D to be a volume makeshift, and
> added a drive definition like so:
> drive MS /dev/gvinum/makeshift
>
> Then, the idea was to do a raid5 of drives A, B, and "drive" MS.
I don't know if this is even possible. It's an interesting idea, but even
if it works, recovery is totally non-trivial when either drive C or drive D
goes away. Plus, you'll surely take a huge performance hit because of the
two vinum layers you have to go through for every single access.
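For anyone trying this at home, the two-layer setup would look roughly like
the config below. The device names are invented, and I haven't tested
whether gvinum even accepts one of its own volumes as a drive:

  # first pass: concat C and D into one 200G volume
  drive c device /dev/ad2s1e
  drive d device /dev/ad3s1e
  volume makeshift
    plex org concat
      sd length 0 drive c
      sd length 0 drive d

  # second pass: use the makeshift volume as a "drive"
  drive a device /dev/ad0s1e
  drive b device /dev/ad1s1e
  drive ms device /dev/gvinum/makeshift
  volume bigraid
    plex org raid5 512k
      sd length 0 drive a
      sd length 0 drive b
      sd length 0 drive ms

Every access to bigraid that lands on the ms subdisk goes through the whole
vinum stack twice, which is where the performance hit comes from.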
RAID-5 is normally used when you need redundancy but can't afford to shell
out the cash for mirroring. If you don't care about losing data you are
better off using a stripe; if you are absolutely paranoid about your data
you are better off mirroring [1]. If you do need RAID-5, though, you have
to do it properly, which means equally sized disks. Also, do make sure
they're not all from the same manufacturing batch...
> Unfortunately this caused a panic, which is less surprising. Does anyone
> know of another way to accomplish the same thing (RAID-5 over two disks
> and two half-sized disks concatenated together?) or a similar result?
The best you can do is have 100G of drives A & B participate in a 4 x 100G
RAID-5, then use the other 2 x 100G in a separate volume. You *could*
theoretically concat 2 plexes in one volume, where one plex is RAID-5 and
the other a mirror or some such, but as said above, it's not easy to
recover from a drive failure in that case, defeating the point of RAID
imho.
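Concretely, something like this (again, device names are made up, and the
sizes assume your 200G disks really have 200G usable):

  drive a device /dev/ad0s1e   # 200G
  drive b device /dev/ad1s1e   # 200G
  drive c device /dev/ad2s1e   # 100G
  drive d device /dev/ad3s1e   # 100G

  # 4 x 100G RAID-5: 300G usable, survives one dead disk
  volume raid
    plex org raid5 512k
      sd length 100g drive a
      sd length 100g drive b
      sd length 100g drive c
      sd length 100g drive d

  # the 100G left over on A and B, no redundancy
  volume leftover
    plex org concat
      sd length 0 drive a
      sd length 0 drive b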
HTH,
--Stijn
[1] all my opinion; there are lots of opinions on what type of RAID is
best in what situation. My point is mainly that you really need to
consider what you are trying to accomplish by using RAID.
--
"Linux has many different distributions, meaning that you can probably find
one that is exactly what you want (I even found one that looked like a Unix
system)."
-- Mike Meyer, from a posting at questions@freebsd.org