Hi all,

Forget the first post. My server is still somewhat unstable after the
upgrade attempts, and I apologize. Here goes the real deal:

I tried to upgrade my existing 5.2.1 to 5.3 and ran into severe problems
with my vinum setup in trying to do so. I think I can answer why; the
reason is stated here:

http://www.freebsd.org/releases/5.3R/errata.html

More exactly:

"(31 Oct 2004, updated on 12 Nov 2004) The vinum(4) subsystem works on
5.3, but it can cause a system panic at boot time. As a workaround you
can add vinum_load="YES" to /boot/loader.conf. As an alternative you can
also use the new geom(4)-based vinum(4) subsystem. To activate the
geom(4)-aware vinum at boot time, add geom_vinum_load="YES" to
/boot/loader.conf and remove start_vinum="YES" in /etc/rc.conf if it
exists. While some uncommon configurations, such as multiple vinum
drives on a disk, are not supported, it is generally backward
compatible. Note that for the geom(4)-aware vinum, its new userland
control program, gvinum, should be used, and it is not yet
feature-complete."

I think I have to disagree with calling multiple drives on a disk
"uncommon". In fact, I remember that being the way it was demonstrated
in an old version of the handbook. Here is my current setup after
rolling back to FreeBSD 5.2.1:

3 drives:
D elben          State: up   /dev/da1s1h   A: 0/7825 MB (0%)
D donau          State: up   /dev/da0s1h   A: 0/7825 MB (0%)
D spree          State: up   /dev/ad4a     A: 3/114473 MB (0%)

5 volumes:
V var            State: up   Plexes: 2   Size: 600 MB
V tmp            State: up   Plexes: 2   Size: 600 MB
V home           State: up   Plexes: 2   Size: 1000 MB
V usr            State: up   Plexes: 2   Size: 5625 MB
V data01         State: up   Plexes: 1   Size: 111 GB

9 plexes:
P var.p0       C State: up   Subdisks: 1   Size: 600 MB
P tmp.p0       C State: up   Subdisks: 1   Size: 600 MB
P home.p0      C State: up   Subdisks: 1   Size: 1000 MB
P usr.p0       C State: up   Subdisks: 1   Size: 5625 MB
P var.p1       C State: up   Subdisks: 1   Size: 600 MB
P tmp.p1       C State: up   Subdisks: 1   Size: 600 MB
P home.p1      C State: up   Subdisks: 1   Size: 1000 MB
P usr.p1       C State: up   Subdisks: 1   Size: 5625 MB
P data01.p0    C State: up   Subdisks: 1   Size: 111 GB

9 subdisks:
S var.p0.s0      State: up   D: donau   Size: 600 MB
S tmp.p0.s0      State: up   D: donau   Size: 600 MB
S home.p0.s0     State: up   D: donau   Size: 1000 MB
S usr.p0.s0      State: up   D: donau   Size: 5625 MB
S var.p1.s0      State: up   D: elben   Size: 600 MB
S tmp.p1.s0      State: up   D: elben   Size: 600 MB
S home.p1.s0     State: up   D: elben   Size: 1000 MB
S usr.p1.s0      State: up   D: elben   Size: 5625 MB
S data01.p0.s0   State: up   D: spree   Size: 111 GB

donau and elben are two identical disks containing all of the system
partitions.

As far as I can tell, the new 5.3 release makes this disk configuration
invalid?

If yes, is that a permanent decision, or something that will change in
the near future, say in 5.4?

If not, I have a major problem here :-(

-- 
With regards / med venlig hilsen

Nikolaj Hansen
Algade 15, 2 tv
9000 Aalborg
Danmark

"Even on the highest throne in the world, we are seated, still, upon our
arses." - Montaigne
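[For reference, the two workarounds quoted from the errata above amount
to one-line changes in the boot configuration files. A minimal sketch,
taken directly from the errata text; pick one alternative:]

    # Alternative 1: keep the old vinum(4) -- add to /boot/loader.conf
    vinum_load="YES"

    # Alternative 2: switch to the GEOM-based vinum -- add to /boot/loader.conf
    geom_vinum_load="YES"

    # For alternative 2, also remove (or comment out) the old start line
    # in /etc/rc.conf, if it exists:
    #start_vinum="YES"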
On Saturday, 18 December 2004 20:52, Nikolaj Hansen wrote:
> [...]
> I think I have to disagree with calling multiple drives on a disk
> "uncommon". In fact, I remember that being the way it was demonstrated
> in an old version of the handbook. Here is my current setup after
> rolling back to FreeBSD 5.2.1:
>
> 3 drives:
> D elben          State: up   /dev/da1s1h   A: 0/7825 MB (0%)
> D donau          State: up   /dev/da0s1h   A: 0/7825 MB (0%)
> D spree          State: up   /dev/ad4a     A: 3/114473 MB (0%)
> [...]
> As far as I can tell, the new 5.3 release makes this disk configuration
> invalid?
>
> If yes, is that a permanent decision, or something that will change in
> the near future, say in 5.4?
>
> If not, I have a major problem here :-(

I don't understand your excitement... :-)

You have three (vinum) drives on three separate (physical) disks. On
these drives are several concat plexes. Nothing here violates the
requirements for the GEOM-based vinum, as long as your old vinum-type
partitions don't start at an offset of "0" (zero) within the slice
(da0/da1) or disk (ad0) respectively.

*If* that is the case (i.e. an offset of 0 for the vinum partitions),
you have a problem indeed, but otherwise I would not expect any
problems.

-- 
Ciao/BSD - Matthias

Matthias Schuendehuette <msch [at] snafu.de>, Berlin (Germany)
PGP-Key at <pgp.mit.edu> and <wwwkeys.de.pgp.net> ID: 0xDDFB0A5F
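[A quick way to check the offset Matthias mentions is to inspect the
label of each slice that holds a vinum partition. A sketch only; the
device name matches the listing above, but the sizes and offsets shown
are purely illustrative, and on 5.2.x the tool may still be called
disklabel rather than bsdlabel:]

    # bsdlabel da0s1
    # /dev/da0s1:
    8 partitions:
    #        size   offset    fstype
      a:   409600        0    4.2BSD
      h: 15616000   409600     vinum   <- offset is not 0: fine for geom_vinum

[If the vinum partition's offset column reads 0, that is the problem
case described above.]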
On Sat, 2004-12-18 at 20:52 +0100, Nikolaj Hansen wrote:
> While some uncommon configurations, such as multiple vinum drives on a
> disk, are not supported, it is generally backward compatible. Note that
> for the geom(4)-aware vinum, its new userland control program, gvinum,
> should be used, and it is not yet feature-complete."
>
> I think I have to disagree with calling multiple drives on a disk
> "uncommon". In fact, I remember that being the way it was demonstrated
> in an old version of the handbook. Here is my current setup after
> rolling back to FreeBSD 5.2.1:

I think you are misunderstanding things a little here. It's not that
multiple vinum volumes per disk can't be handled; it's multiple vinum
configurations per disk that are problematic. In other words, I believe
it's not supported to have, say, a /dev/da0s1g "vinum" partition
(containing vinum volumes, plexes, and subdisks) and also, say, a
/dev/da0s1h "vinum" partition (again containing vinum volumes, plexes,
and subdisks). Such a setup was okay under the old vinum, but is not
okay under geom_vinum (AFAIK).

> As far as I can tell, the new 5.3 release makes this disk configuration
> invalid?

The vinum configuration you listed appears fine for geom_vinum. I
transitioned my old root-on-vinum, all-mirrored setup over to geom_vinum
without any problems. (Yours looks the same, except that you also have a
third drive with a single concat-plex volume on it.)

> If not I have a major problem here :-(

The biggest problem you'll have is if your system suffers the ATA
"TIMEOUT - WRITE_DMA" woe that bedevils some of us under 5.3. When that
happens, your mirror will be knocked into a degraded state (half of your
mirrored plexes will be marked down) even though the drive is okay.
Unfortunately, without "setstate" being implemented in gvinum to mark
the drive as up, thereby allowing you to issue "gvinum start" for the
"downed" plexes, there's little you can do to get the "failed" drive
recognised as being in the "up" state other than to reboot. (You might
be able to use atacontrol to stop/start or otherwise reset the drive; in
my particular system I can't use atacontrol detach/attach because both
drives are on the same channel.)

At any rate, every so often when this happens you'll have to
resynchronise the "failed" plexes, which *really* sucks the I/O life out
of the system, because there's no way to throttle back reconstruction,
unlike with geom_mirror (which has two sysctls to govern the load
imposed by resynchronisation).

But it looks like you're lucky, because your mirrored drives are SCSI. I
don't know about your ATA concat-plex volume, though...

Cheers,

Paul.

-- 
e-mail: paul@gromit.dlib.vt.edu

"Without music to decorate it, time is just a bunch of boring production
deadlines or dates by which bills must be paid."
        --- Frank Vincent Zappa
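[For what it's worth, the recovery Paul describes looks roughly like the
following on a system where the ATA channel can safely be reset. This is
a sketch under stated assumptions: the channel name (ata2) and plex name
(usr.p1) are only examples, the exact atacontrol argument syntax is
whatever atacontrol(8) documents on your release, and since gvinum in
5.3 lacks "setstate", a reboot may still be the only way to get the
drive seen as "up" again:]

    # See which channel the "failed" disk sits on
    atacontrol list

    # Reset the drive (avoid this if the other half of a mirror shares
    # the same channel, as in Paul's case)
    atacontrol detach ata2
    atacontrol attach ata2

    # Check plex states and restart whichever plexes are shown as down
    gvinum list
    gvinum start usr.p1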