Dear all,

I'm happy that gptzfsloader will work with just about any zpool
configuration you could imagine, but...

We have an HP DL185 G5 with a P400 RAID array, fully populated with 12
drives.  Since there's no JBOD mode (or at least, not one you can reach
from the BIOS configuration screens), the array is configured as 12
single-disk RAID0 arrays.  As I posted previously, we had FreeBSD
8.1-STABLE installed on a 6-disk raidz1, and everything was happy.
However, we had some difficulty adding a second vdev -- another raidz1
using the other 6 drives.

Well, to cut a long story short: eventually we did this by hot-plugging
disks 7 -- 12 after FreeBSD was up and running.  Everything was cool and
dandy, and we had the server running on all drives after setting up GPT
partition tables and doing a 'zpool add'.

Until we tested rebooting.

On attempted reboot, the loader reported 8 drives, and ZFS subsequently
failed with the dreaded "ZFS: i/o error - all block copies unavailable"
error.  Now, we've had a poke through the FreeBSD sources, and as far as
we can tell, FreeBSD will work with up to 31 devices being reported by
the BIOS.  Is that correct, and is the limitation in what the hardware
reports to the loader in the early stages of booting?

Any good tricks for getting around this sort of limitation?  Our current
plan is to set up a USB memstick with /boot on it, by adapting the
instructions here: http://wiki.freebsd.org/RootOnZFS/UFSBoot -- which
isn't ideal, as the memstick will be a single point of failure.

	Cheers,

	Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.                   7 Priory Courtyard
                                                  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey     Ramsgate
JID: matthew@infracaninophile.co.uk               Kent, CT11 9PW
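[Ed.: the "GPT partition tables and 'zpool add'" step described above can be
sketched roughly as follows.  The disk names (da6..da11), GPT labels, and the
pool name "tank" are placeholders for illustration, not the poster's actual
configuration.]

```shell
# Rough sketch of the procedure described above: put a GPT label on each
# hot-plugged drive, then add all six as a second raidz1 vdev.
# Disk names (da6..da11) and the pool name (tank) are placeholders.
for d in da6 da7 da8 da9 da10 da11; do
    gpart create -s gpt $d                    # new GPT partition table
    gpart add -t freebsd-zfs -l disk-$d $d    # one ZFS partition per disk
done

# Attach the six new partitions to the existing pool as a raidz1 vdev.
zpool add tank raidz1 \
    gpt/disk-da6 gpt/disk-da7 gpt/disk-da8 \
    gpt/disk-da9 gpt/disk-da10 gpt/disk-da11
```

Using GPT labels (gpt/disk-*) rather than raw device names keeps the vdev
membership stable if the controller renumbers the drives.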
On Thu, 21 Oct 2010 13:01:09 +0100, Matthew Seaman wrote:

> Our current plan is to set up a USB memstick with /boot on it, by
> adapting the instructions here: http://wiki.freebsd.org/RootOnZFS/UFSBoot
> -- which isn't ideal as the memstick will be a single point of failure.

We always use gmirrored USB sticks.  Works like a charm.

-- 
WBR, Boris Samorodov (bsam)
Research Engineer, http://www.ipt.ru            Telephone & Internet SP
FreeBSD Committer, http://www.FreeBSD.org       The Power To Serve
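[Ed.: a gmirrored /boot on two USB sticks, as suggested, might look something
like the sketch below.  The device names (da0, da1) and mirror label (bootmir)
are assumptions, not Boris's actual setup.]

```shell
# Sketch: mirror two USB sticks with gmirror, so that /boot survives the
# failure of either stick.  Device names (da0, da1) and the mirror label
# (bootmir) are example values.
gmirror label -v bootmir da0 da1        # create the mirror across both sticks
newfs /dev/mirror/bootmir               # UFS filesystem on the mirror device
mount /dev/mirror/bootmir /mnt
cp -Rp /boot /mnt/                      # populate the mirror with /boot

# The gmirror module must be available at boot, e.g. in loader.conf:
#   geom_mirror_load="YES"
```

Either stick can then fail without losing the boot files; gmirror
resynchronises a replacement stick when it is attached.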
On Thu, 21 Oct 2010 14:01:09 +0200, Matthew Seaman
<m.seaman@infracaninophile.co.uk> wrote:

> Dear all,
>
> I'm happy that gptzfsloader will work with just about any zpool
> configuration you could imagine, but...
>
> We have an HP DL185 G5 with a P400 raid array, fully populated with 12
> drives.  Since there's no JBOD mode (or at least, not one you can get to
> from the BIOS configuration screens), the array is configured as 12
> single disk RAID0 arrays.  As I posted about previously, we had FreeBSD
> 8.1-STABLE installed on a 6 disk raidz1, and everything was happy.
> However, we were having some difficulty adding a second vdev -- another
> raidz1 using the other 6 drives.
>
> Well, to cut a long story short: eventually we did this by hot-plugging
> disks 7 -- 12 after FreeBSD was up and running.  Everything was cool and
> dandy, and we had the server running on all drives after setting up gpt
> partition tables and doing a 'zpool add'.
>
> Until we tested rebooting.
>
> On attempted reboot, the loader reported 8 drives, and subsequently ZFS
> flailed with the dreaded "ZFS: i/o error - all block copies unavailable"
> error.  Now, we've had a poke through FreeBSD sources, and as far as we
> can tell, FreeBSD will work with up to 31 devices being reported from
> the BIOS.  Is this correct, and the limitation is in what the hardware
> is reporting to the loader at the early stages of booting?
>
> Any good tricks for getting round this sort of limitation?  Our current
> plan is to set up a USB memstick with /boot on it, by adapting the
> instructions here: http://wiki.freebsd.org/RootOnZFS/UFSBoot -- which
> isn't ideal as the memstick will be a single point of failure.

I think I've encountered the same problem as you.  In my configuration
there is also an HP server, with an HP SmartArray controller and six
disks configured as single-disk RAID0 logical units.  I don't think it
is related to the size of the pool (try testing with small GPT
partitions -- for me that doesn't work either).
I think it is something specific to this hardware and ZFS.  I will
provide more details soon, but for now, could you test the following
configurations:

- mirror (it works for me),
- raidz(2) (it doesn't work for me),
- raidz(2) but without the SmartArray controller -- adX or adaX disks
  (it works for me).

Please try the 'status' command while you are seeing the
"ZFS: i/o error..." message.

-- 
am
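[Ed.: the test matrix above can be exercised with small throwaway GPT
partitions, along these lines.  Disk names (da0..da2), labels, sizes, and the
pool name "test" are placeholders, not the poster's actual configuration.]

```shell
# Create small test partitions and try each pool layout in turn.
# Disk names (da0..da2) and the pool name (test) are placeholders.
gpart add -t freebsd-zfs -s 1G -l t0 da0
gpart add -t freebsd-zfs -s 1G -l t1 da1
gpart add -t freebsd-zfs -s 1G -l t2 da2

# Case 1: mirror (reported to boot successfully above)
zpool create test mirror gpt/t0 gpt/t1
zpool destroy test

# Case 2: raidz (reported to fail above on the SmartArray)
zpool create test raidz gpt/t0 gpt/t1 gpt/t2
zpool destroy test
```

After installing the boot code and rebooting into each layout, the outcome at
the loader shows whether the failure tracks the vdev type or the controller.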