Hi:

What is the most common practice for allocating (choosing) the two disks
used for the boot drives, in a zfs root install, for the mirrored rpool?

The docs for Thumper, and many blogs, always point at cfgadm slots 0 and 1,
which are sata3/0 and sata3/4, and which most often map to c5t0d0 and
c5t4d0. But those are on the same controller (yes, I've read all that
before). And these seem to be the ones that the BIOS agrees to boot from.

However, the doc below, in the section
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide#ZFS_Configuration_Example_.28x4500_with_raidz2.29
mentions using two boot disks for the zfs root on different controllers:

zpool create mpool mirror c5t0d0s0 c4t0d0s0

I'll assume that they meant "rpool" instead of mpool. I had thought that the
BIOS will only agree to boot from the slot 0 and slot 1 disks, which are on
the same controller. Does anyone know which doc is correct, and what two
disk devices are typically being used for the zfs root these days?

If I stick with the x4500 docs and use c5t0d0 and c5t4d0, they can both be
booted from the BIOS, but it makes laying out the remaining raidz2 data pool
a little trickier: with 7 sets of 6-disk raidz2, I can't get all the vdevs
onto different controller numbers.

But if I use the example from the SolarisInternals.com guide above, with
both of the zfs root pool disks on different controllers, it is easier to
allocate the remaining vdevs for the "7 sets of 6-disk raidz2", but then I
can't see how the BIOS could select both of those boot devices.

Sincere Thanks,

Neal
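For concreteness, here is a rough sketch of the two layouts being compared.
Only c5t0d0, c5t4d0, and c4t0d0 come from the thread; every other cNtNdN
name below is a placeholder, so check cfgadm/format output on the actual
box before relying on any of this:

    # Root mirror on the two BIOS-bootable slots per the x4500 docs
    # (slots 0 and 1, both behind controller c5):
    zpool create rpool mirror c5t0d0s0 c5t4d0s0

    # Root mirror split across two controllers, as in the SolarisInternals
    # example (only useful if the BIOS will really boot from the c4 disk):
    zpool create rpool mirror c5t0d0s0 c4t0d0s0

    # Data pool sketch: 7 top-level raidz2 vdevs of 6 disks each, one disk
    # from each controller where possible (placeholder names; only the
    # first two of the seven vdevs are spelled out here):
    zpool create tank \
        raidz2 c0t0d0 c1t0d0 c4t1d0 c5t1d0 c6t0d0 c7t0d0 \
        raidz2 c0t1d0 c1t1d0 c4t2d0 c5t2d0 c6t1d0 c7t1d0

In practice the install program creates the root pool itself, so the rpool
lines above are only meant to show which two disks end up in the mirror.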
Cindy.Swearingen at Sun.COM
2009-Mar-19 22:16 UTC
[zfs-discuss] X4500 Thumper, config for boot disks?
Hi Neal,

This example needs to be updated with a ZFS root pool. It could also be that
I mapped the wrong boot disks in this example.

You can name the root pool whatever you want: rpool, mpool, mypool. In these
examples, I was using rpool for the RAIDZ pool and mpool for the mirrored
pool, not knowing that rpool would become the default name of the root pool
created by the install program.

You will have to use the boot disks for (I hope) a mirrored root pool and
use the remaining disks/controllers for the raidz/mirrored data pool as best
you can. When I have time to model these configs, I'll update the examples.

I'm sure the experts on this list have some ideas...

Cindy

Neal Pollack wrote:
> Hi:
>
> What is the most common practice for allocating (choosing) the two disks
> used for the boot drives, in a zfs root install, for the mirrored rpool?
> [...]
Neal Pollack wrote:
> Hi:
>
> What is the most common practice for allocating (choosing) the two disks
> used for the boot drives, in a zfs root install, for the mirrored rpool?
>
> The docs for thumper, and many blogs, always point at cfgadm slots 0 and 1,
> which are sata3/0 and sata3/4, which most often map to c5t0d0 and c5t4d0.
> But those are on the same controller (yes, I've read all that before).
> And these seem to be the ones that the BIOS agrees to boot from.
>
> However, the doc below, in the section
> http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide#ZFS_Configuration_Example_.28x4500_with_raidz2.29
> mentions using two boot disks for the zfs root on different controllers:
>
> zpool create mpool mirror c5t0d0s0 c4t0d0s0
>
> I'll assume that they meant "rpool" instead of mpool. I had thought that
> the BIOS will only agree to boot from the slot 0 and slot 1 disks, which
> are on the same controller.
> Does anyone know which doc is correct, and what two disk devices
> are typically being used for the zfs root these days?

It depends on your BIOS. AFAIK, there is no way for the BIOS to tell
the installer which disks are valid boot disks. For OBP (SPARC) systems,
you can have the installer know which disks are available for booting.

> If I stick with the x4500 docs and use c5t0d0 and c5t4d0, they can both
> be booted from the BIOS, but it makes laying out the remaining raidz2
> data pool a little trickier: with 7 sets of 6-disk raidz2, I can't get
> all the vdevs onto different controller numbers.
>
> But if I use the example from the SolarisInternals.com guide above, with
> both of the zfs root pool disks on different controllers, it is easier
> to allocate the remaining vdevs for the "7 sets of 6-disk raidz2", but
> then I can't see how the BIOS could select both of those boot devices.

Do you think it matters for the availability of data which controller is
used? The answer, for availability, in a system like the x4500, is to use
only one controller, but you have 6, because they don't make a 48-port
SATA controller. In other words, don't worry about controllers on a
machine like the x4500 when you are considering data availability. Do
worry about the disks: use double parity if you can, single parity
otherwise.
 -- richard
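Whichever pair of disks ends up in the root mirror, a minimal sketch of the
usual follow-up on an x86 install where the installer only set up the first
disk: both halves of the mirror need boot blocks so the BIOS can boot from
either one. The device names below are the ones from this thread; adjust
for your own layout.

    # Attach the second disk to the root pool, then install GRUB on it so
    # either half of the mirror is bootable from the BIOS:
    zpool attach rpool c5t0d0s0 c5t4d0s0
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0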
Robert Milkowski
2009-Mar-24 17:33 UTC
[zfs-discuss] X4500 Thumper, config for boot disks?
Hello Richard,

Friday, March 20, 2009, 12:23:40 AM, you wrote:

RE> It depends on your BIOS. AFAIK, there is no way for the BIOS to
RE> tell the installer which disks are valid boot disks. For OBP (SPARC)
RE> systems, you can have the installer know which disks are available
RE> for booting.

IIRC biosdev can actually extract such information. Caiman marks such
disks as bootable and presents them separately (there's an RFE to better
present them in cases like the x4500, where the boot disks are hard to
find among so many disk drives).

-- 
Best regards,
Robert Milkowski
http://milek.blogspot.com
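A rough sketch of checking this by hand on an x86 box (the description of
biosdev's output here is approximate, not captured from an x4500):

    # Print the BIOS's view of the disks (run as root).  Each line pairs a
    # BIOS device number with a physical device path; 0x80 is the disk the
    # BIOS tries to boot first:
    /usr/sbin/biosdev

    # Match an interesting physical path back to its cNtNdN name by looking
    # at the /dev/dsk symlinks, e.g.:
    ls -l /dev/dsk/c5t0d0s0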