Anonymous Remailer (austria)
2011-Dec-15 23:20 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On Solaris 10, if I install using a ZFS root on only one drive, is there a way to add another drive as a mirror later? Sorry if this was discussed already; I searched the archives and couldn't find the answer. Thank you.
Alan Hargreaves
2011-Dec-15 23:30 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
You do it as you would with any zpool. Mirroring is fine for the root pool; it's only things like raidz* and concats that are not supported.

# zpool attach rpool existing-device new-device

Note the use of attach: "add" would try to make a concat.

Regards,
Alan Hargreaves
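A minimal session sketch (the device names here are examples only; substitute your own):

```
# zpool status rpool                      # note the existing root device name
# zpool attach rpool c1t0d0s0 c1t1d0s0    # attach, not add
# zpool status rpool                      # attach starts a resilver
```

The pool stays online throughout; the mirror is only fully redundant once zpool status reports the resilver complete.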
Cindy Swearingen
2011-Dec-15 23:39 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Hi Anon,

The disk that you attach to the root pool will need an SMI label and a slice 0.

The syntax to attach a disk to create a mirrored root pool is like this, for example:

# zpool attach rpool c1t0d0s0 c1t1d0s0

Thanks,

Cindy
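A hedged sketch of the usual preparation when the new disk is at least as large as the original (hypothetical device names; reading from slice 2 copies the whole-disk VTOC label from the existing root disk to the new one before the attach):

```
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach rpool c1t0d0s0 c1t1d0s0
```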
John D Groenveld
2011-Dec-15 23:40 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
<URL:http://docs.oracle.com/cd/E23823_01/html/819-5461/ggset.html#gkdep>
| How to Create a Mirrored ZFS Root Pool (Postinstallation)

John
groenveld at acm.org
Frank Cusack
2011-Dec-15 23:41 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Yes, except if your root pool is on a USB stick or removable media.
It can still be done for USB, but you have to boot from alternate media to attach the mirror.

On Thu, Dec 15, 2011 at 3:41 PM, Frank Cusack <frank at linetwo.net> wrote:
> Yes, except if your root pool is on a USB stick or removable media.
Tim Cook
2011-Dec-16 00:13 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Do you still need to do the grub install?
Gregg Wonderly
2011-Dec-16 07:27 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Cindy, will it ever be possible to just have attach mirror the surfaces, including the partition tables? I spent an hour today trying to get a new mirror onto my root pool. A 250GB disk had failed, and I only had a 1.5TB disk handy as a replacement. prtvtoc ... | fmthard does not work in this case, so you have to do the partitioning by hand, which is just silly to fight with anyway.

Gregg
Frank Cusack
2011-Dec-16
[zfs-discuss] Can I create a mirror for a root rpool?
You can just do fdisk to create a single large partition. The attached mirror doesn't have to be the same size as the first component.
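On x86 this can be sketched roughly as follows (hypothetical device names, and under the constraint discussed in this thread that the root pool still needs an SMI label and a slice 0; fdisk -B writes a default single Solaris2 partition spanning the disk):

```
# fdisk -B /dev/rdsk/c12d1p0     # one Solaris2 partition covering the disk
# format c12d1                   # partition: create one large slice 0
# zpool attach rpool c8t0d0s0 c12d1s0
```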
Andrew Gabriel
2011-Dec-16 08:56 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On 12/16/11 07:27 AM, Gregg Wonderly wrote:
> Cindy, will it ever be possible to just have attach mirror the surfaces,
> including the partition tables? [...] prtvtoc ... | fmthard does not work
> in this case

Can you be more specific about why it fails? I have seen a couple of cases, and I'm wondering if you're hitting the same thing. Can you post the prtvtoc output of your original disk, please?

--
Andrew Gabriel
Anonymous Remailer (austria)
2011-Dec-16 12:41 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Thank you all for your answers and links :-)
Cindy Swearingen
2011-Dec-16 16:38 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Hi Tim,

No, in current Solaris releases the boot blocks are installed automatically by a zpool attach operation on a root pool.

Thanks,

Cindy
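For older releases, where the boot blocks were not installed automatically, they had to be added by hand after the attach (hypothetical device name):

```
On x86:
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
On SPARC:
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
```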
Cindy Swearingen
2011-Dec-16 16:44 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Hi Gregg,

Yes, fighting with partitioning is just silly. My wish is that Santa will bring us bootable GPT/EFI labels in the coming year, so you will be able to just attach disks to root pools.

Send us some output so we can see what the trouble is. In the meantime, the links below might help.

Thanks,

Cindy

http://docs.oracle.com/cd/E23824_01/html/821-1459/disksprep-34.html
http://docs.oracle.com/cd/E23824_01/html/821-1459/diskssadd-2.html#diskssadd-5
http://docs.oracle.com/cd/E23824_01/html/821-1459/disksxadd-2.html#disksxadd-30
Gregg Wonderly
2011-Dec-16 16:44 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
The issue is really quite simple. The Solaris install, on x86 at least, chooses to use slice 0 for the root partition. That slice is not created by a default format/fdisk, and so we have the web strewn with

prtvtoc path/to/old/slice2 | fmthard -s - path/to/new/slice2

as a way to make the two commands "access" the entire disk. If you have to use dissimilar-sized disks, because 1) that's the only media you have, or 2) you want to increase the size of your root pool, then all you end up with is an error message about overlapping partitions and no ability to make progress.

If I then use dd if=/dev/zero to erase the front of the disk, fire up format, select fdisk, say yes to create Solaris2 partitioning, and then use partition to add a slice 0, I will still have problems getting the whole disk in play.

So, the end result is that I have to jump through hoops when, in the end, I'd really like to just add the whole disk, every time. If I say

zpool attach rpool c8t0d0s0 c12d1

I really do mean the whole disk, and I'm not sure why it can't "just happen". Failing to type a "slice" reference is no worse a typo than typing 's2' by accident, because that's what I've been typing with all the other commands to try and get the disk partitioned.

I just really think there's not a lot of value in all of this, especially with ZFS, where we can in fact add more disks/vdevs to keep expanding space, and extremely rarely is that going to be done, for the root pool, with fractions of disks. The use of SMI labels, and the absolute refusal to use EFI partitioning, plus all of this, stacks up to a pretty large barrier to "simple" and/or "easy" administration.

I'm very nervous when I have a simplex file system sitting there, and when a disk has died, I'm doubly nervous that the other half is going to fall over. I'm not trying to be hard-nosed about this; I'm just trying to share my angst and frustration with the details that drove me in that direction.

Gregg Wonderly
Cindy Swearingen
2011-Dec-16 17:23 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Yep, well said, understood, point taken, I hear you, you're preaching to the choir. Have faith in Santa. A few comments:

1. I need more info on the x86 install issue. I haven't seen this problem myself.

2. We don't use slice 2 for anything, and its use is not recommended.

3. The SMI disk label is a long-standing boot requirement. We're working on it.

4. Both the S10 and S11 installers can create a mirrored root pool, so you don't have to do this manually. If you do have to do this manually in the S11 release, you can use this shortcut to slap on a new label, but it does no error checking, so make sure you have the right disk:

# format -L vtoc -d c1t0d0

Unfortunately, this applies the default partition table, which might be a 129MB slice 0, so you still have to do the other 17 steps to create one large slice 0. I filed an RFE to do something like this:

# format -L vtoc -a(ll) s0 c1t0d0

5. The overlapping partition error on x86 systems is a bug (unless the partitions really are overlapping), and you can override it by using the -f option.

Thanks,

Cindy
Pawel Jakub Dawidek
2011-Dec-18 11:52 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote:
> The syntax to attach a disk to create a mirrored root pool
> is like this, for example:
>
> # zpool attach rpool c1t0d0s0 c1t1d0s0

BTW, can you, Cindy, or someone else reveal why one cannot boot from RAIDZ on Solaris? Is this because Solaris is using GRUB, and RAIDZ code would have to be licensed under the GPL like the rest of the boot code?

I'm asking because I see no technical problem with this functionality. Booting off of RAIDZ (even RAIDZ3), and also from multi-top-level-vdev pools, has worked just fine on FreeBSD for a long time now. Not being forced to have a dedicated pool just for the root, if you happen to have more than two disks in your box, is very convenient.

--
Pawel Jakub Dawidek
Fajar A. Nugraha
2011-Dec-18 12:24 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On Sun, Dec 18, 2011 at 6:52 PM, Pawel Jakub Dawidek <pjd at freebsd.org> wrote:
> I'm asking, because I see no technical problems with this functionality.
> Booting off of RAIDZ (even RAIDZ3) and also from multi-top-level-vdev
> pools works just fine on FreeBSD for a long time now.

Really? How do they do that?

In Linux, you can boot from disks with a GPT label with grub2, and have "/" on raidz, but only as long as /boot is on a grub2-compatible fs (e.g. a single or mirrored zfs pool, ext4, etc.).

--
Fajar
Pawel Jakub Dawidek
2011-Dec-18 12:30 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On Sun, Dec 18, 2011 at 07:24:27PM +0700, Fajar A. Nugraha wrote:
> Really? How do they do that?

Well, the boot code has access to all the disks, so it is just a matter of being able to interpret the data, which our boot code can do.

> In Linux, you can boot from disks with GPT label with grub2, and have
> "/" on raidz, but only as long as /boot is on grub2-compatible fs
> (e.g. single or mirrored zfs pool, ext4, etc).

This is not the same. On FreeBSD everything, including the root file system and the boot directory, can be on RAIDZ.

--
Pawel Jakub Dawidek
Nathan Kroenert
2011-Dec-18 22:08 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Do note that, though Frank is correct, you have to be a little careful about what might happen should you lose your original disk and only the large mirror half is left... ;)

On 12/16/11 07:09 PM, Frank Cusack wrote:
> You can just do fdisk to create a single large partition. The attached
> mirror doesn't have to be the same size as the first component.
Darren J Moffat
2011-Dec-19 10:18 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On 12/18/11 11:52, Pawel Jakub Dawidek wrote:
> I'm asking, because I see no technical problems with this functionality.
> Booting off of RAIDZ (even RAIDZ3) and also from multi-top-level-vdev
> pools works just fine on FreeBSD for a long time now.

For those of us not familiar with how FreeBSD is installed and boots, can you explain how boot works (i.e. do you use GRUB at all, and if so, which version, and where the early boot ZFS code is)?

--
Darren J Moffat
Pawel Jakub Dawidek
2011-Dec-19 12:58 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On Mon, Dec 19, 2011 at 10:18:05AM +0000, Darren J Moffat wrote:
> On 12/18/11 11:52, Pawel Jakub Dawidek wrote:
>> BTW. Can you, Cindy, or someone else reveal why one cannot boot from
>> RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
>> would have to be licensed under GPL as the rest of the boot code?
>
> For those of us not familiar with how FreeBSD is installed and boots can
> you explain how boot works (ie do you use GRUB at all and if so which
> version and where the early boot ZFS code is).

We don't use GRUB, no. We use three stages for booting. Stage 0 is
basically 512 bytes of a very simple MBR boot loader installed at the
beginning of the disk, used to launch the stage 1 boot loader. Stage 1
is where we interpret all the ZFS (or UFS) structures and read real
files. When you use GPT, there is a dedicated partition (of type
freebsd-boot) where you install the gptzfsboot binary (stage 0 looks for
a GPT partition of type freebsd-boot, loads it, and starts the code in
there). This partition doesn't contain a file system, of course; boot0
is too simple to read any file system. gptzfsboot is where we handle all
ZFS-related operations; it is mostly used to find the root dataset and
load zfsloader from there. The zfsloader is the last stage in booting.
It shares the same ZFS-related code as gptzfsboot (but compiled into a
separate binary); it loads the modules and the kernel and starts it.
The zfsloader is stored in the /boot/ directory on the root dataset.

-- 
Pawel Jakub Dawidek                       http://www.wheelsystems.com
FreeBSD committer                         http://www.FreeBSD.org
Am I Evil? Yes, I Am!                     http://yomoli.com
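[Editor's note: the three-stage install described above maps onto FreeBSD's gpart(8) bootcode command. A minimal sketch; the device name ada0 and the freebsd-boot partition index 1 are assumptions to adjust for your own layout:]

```shell
# Write the stage-0 MBR loader to the start of the disk, and copy
# gptzfsboot into the freebsd-boot partition (partition index 1 here):
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
```

zfsloader itself is installed as part of the base system in /boot/ on the root dataset, so only the first two stages need explicit installation.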
2011-12-19 16:58, Pawel Jakub Dawidek wrote:
> We don't use GRUB, no. We use three stages for booting. Stage 0 is
> basically 512 bytes of a very simple MBR boot loader installed at the
> beginning of the disk ...
> The zfsloader is stored in the /boot/ directory on the root dataset.

Hmm... and is the freebsd-boot partition redundant somehow? Is it
mirrored, or can it be striped over several disks?

I was educated that the core problem lies in the system's required
ability to boot off any single device (including volumes of several
disks singularly presented by HW RAIDs). This "BIOS boot device" should
hold everything that is required and sufficient to go on booting the OS
and using disk sets of some more sophisticated redundancy.

I gather that in FreeBSD's case this "self-sufficient" bootloader is
small and incurs a small storage overhead, even if cloned to a dozen
disks in your array?

In this case, Solaris's problem with only-mirrored ZFS root pools would
be that the "self-sufficient" quantum of required data is much larger;
but otherwise the situation is the same?

Thanks for clarifying,
//Jim
Cindy Swearingen
2011-Dec-19 16:01 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Hi Pawel,

In addition to the current SMI label requirement for booting, I believe
another limitation is that the boot info must be contiguous. I think an
RFE is filed to relax this requirement as well; I just can't find it
right now.

Thanks,

Cindy

On 12/18/11 04:52, Pawel Jakub Dawidek wrote:
> On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote:
>> The syntax to attach a disk to create a mirrored root pool
>> is like this, for example:
>>
>> # zpool attach rpool c1t0d0s0 c1t1d0s0
>
> BTW. Can you, Cindy, or someone else reveal why one cannot boot from
> RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
> would have to be licensed under GPL as the rest of the boot code?
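[Editor's note: putting Cindy's two requirements (an SMI label and a slice 0) together with the attach command, the usual sequence is roughly the following. The device names are the examples used elsewhere in the thread; verify slice layout before running, since fmthard overwrites the target disk's label:]

```shell
# Copy the SMI (VTOC) label from the existing root disk to the new one
# (works when the disks are the same size; otherwise use format(1M)):
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

# Attach slice 0 of the new disk to form the mirror:
zpool attach rpool c1t0d0s0 c1t1d0s0

# Watch the resilver complete before relying on the new half:
zpool status rpool
```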
Daugherity, Andrew W
2011-Dec-19 17:03 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Does "current" include sol10u10 as well as sol11? If so, when did that
go in? Was it in sol10u9?

Thanks,

Andrew

From: Cindy Swearingen <cindy.swearingen at oracle.com>
Subject: Re: [zfs-discuss] Can I create a mirror for a root rpool?
Date: December 16, 2011 10:38:21 AM CST
To: Tim Cook <tim at cook.ms>
Cc: <zfs-discuss at opensolaris.org>

Hi Tim,

No, in current Solaris releases the boot blocks are installed
automatically with a zpool attach operation on a root pool.

Thanks,

Cindy

On 12/15/11 17:13, Tim Cook wrote:
> Do you still need to do the grub install?
Cindy Swearingen
2011-Dec-19 17:29 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
Hi Andrew,

Current releases that apply the bootblocks automatically during a zpool
attach operation are Oracle Solaris 10 8/11 and Oracle Solaris 11.

Thanks,

Cindy

On 12/19/11 10:03, Daugherity, Andrew W wrote:
> Does "current" include sol10u10 as well as sol11? If so, when did that
> go in? Was it in sol10u9?
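[Editor's note: on releases older than Solaris 10 8/11, the bootblocks must be installed by hand after the attach. A sketch using the example device from earlier in the thread:]

```shell
# x86: install the GRUB stage1/stage2 onto the new mirror half:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

# SPARC: install the ZFS bootblock instead:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
```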
On 2011-Dec-20 00:29:50 +1100, Jim Klimov <jimklimov at cos.ru> wrote:
> Hmm... and is the freebsd-boot partition redundant somehow?

In the GPT case, each boot device would have a copy of both the boot0
MBR and a freebsd-boot partition containing gptzfsboot. Both zfsboot
(used with traditional MBR/fdisk partitioning) and gptzfsboot
incorporate standard ZFS code and so should be able to boot off any
supported zpool type (but note that there's a bug in the handling of
gang blocks that was only fixed very recently).

> Is it mirrored or can be striped over several disks?

Effectively, the boot code is mirrored on each boot disk. FreeBSD does
not have the same partitioned-vs-whole-disk issues as Solaris, so there
is no downside to using partitioned disks with ZFS on FreeBSD.

> I was educated that the core problem lies in the system's
> required ability to boot off any single device (including
> volumes of several disks singularly presented by HWRAIDs).
> This "BIOS boot device" should hold everything that is
> required and sufficient to go on booting the OS and using
> disk sets of some more sophisticated redundancy.

Normally, firmware boot code (BIOS, EFI, OFW, etc.) has no RAID ability
and needs to load bootstrap code off a single (physical or HW RAID) boot
device. The exception is the primitive software RAID solutions found in
consumer PC hardware - which are best ignored.

Effectively, all the code needed prior to the point where a software
RAID device can be built must be replicated in full across all boot
devices. For RAID-1, everything is already replicated, so it's
sufficient to treat one mirror as the boot device and let the kernel
build the RAID device. For anything more complex, one of the bootstrap
stages has to build enough of the RAID device to allow the kernel (etc.)
to be read out of it.

> I gather that in FreeBSD's case this "self-sufficient"
> bootloader is small and incurs a small storage overhead,
> even if cloned to a dozen disks in your array?

gptzfsboot is currently ~34KB (20KB larger than the equivalent UFS
bootstrap). GPT has a 34-sector overhead, and the freebsd-boot partition
is typically 128 sectors to allow for future growth (though I've shrunk
it at home to 94 sectors so the following partition falls on a 64KB
boundary, to better suit future 4KB disks). My mirrored ZFS system at
work is partitioned as:

$ gpart show -p
=>       34  78124933    ad0  GPT           (37G)
         34       128  ad0p1  freebsd-boot  (64k)
        162   5242880  ad0p2  freebsd-swap  (2.5G)
    5243042  72881925  ad0p3  freebsd-zfs   (34G)

=>       34  78124933    ad1  GPT           (37G)
         34       128  ad1p1  freebsd-boot  (64k)
        162   5242880  ad1p2  freebsd-swap  (2.5G)
    5243042  72881925  ad1p3  freebsd-zfs   (34G)

(The first two columns are absolute offset and size in sectors.) My root
pool is a mirror of ad0p3 and ad1p3.

> In this case Solaris's problem with only-mirrored ZFS
> on root pools is that the "self-sufficient" quantum
> of required data is much larger; but otherwise the
> situation is the same?

If you have enough data and disk space, the overheads in combining a
mirrored root with RAIDZ data aren't that great. At home, I have 6 1TB
disks; I've carved out 8GB from the front of each (3GB for swap and 5GB
for root) and put the remainder in a RAIDZ2 pool - that's less than 1%
overhead. 5GB is big enough to hold the complete source tree and compile
it, as well as the base OS. I have a 3-way mirrored root across half the
disks and use the other "root" partitions as "temporary" roots when
upgrading.

-- 
Peter Jeremy
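[Editor's note: Peter's layout can be reproduced on a second, blank disk with commands along these lines (sizes taken from his gpart output; the disk name ad1 and partition sizes are examples, not a prescription):]

```shell
# Partition the second disk to match the first:
gpart create -s gpt ad1
gpart add -t freebsd-boot -s 128 ad1       # 128 sectors = 64KB boot
gpart add -t freebsd-swap -s 5242880 ad1   # 2.5GB swap
gpart add -t freebsd-zfs ad1               # remainder for ZFS

# Install the boot code on the new disk too, so either disk can boot:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad1

# Attach the new ZFS partition to mirror the root pool:
zpool attach rpool ad0p3 ad1p3
```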
Gregg Wonderly
2011-Dec-19 23:55 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
That's why I'm asking. I think it should always mirror the partition
table and allocate exactly the same amount of space, so that the pool
doesn't suddenly change sizes unexpectedly and require a disk size that
I don't have at hand to put the mirror back up.

Gregg

On 12/18/2011 4:08 PM, Nathan Kroenert wrote:
> Do note, that though Frank is correct, you have to be a little careful
> around what might happen should you drop your original disk, and only
> the large mirror half is left... ;)
>
> On 12/16/11 07:09 PM, Frank Cusack wrote:
>> You can just do fdisk to create a single large partition. The attached
>> mirror doesn't have to be the same size as the first component.
>>
>> On Thu, Dec 15, 2011 at 11:27 PM, Gregg Wonderly <greggwon at gmail.com> wrote:
>>> Cindy, will it ever be possible to just have attach mirror the
>>> surfaces, including the partition tables? I spent an hour today
>>> trying to get a new mirror on my root pool. There was a 250GB disk
>>> that failed. I only had a 1.5TB handy as a replacement. prtvtoc ... |
>>> fmthard does not work in this case, so you have to do the
>>> partitioning by hand, which is just silly to fight with anyway.
If you don't detach the smaller drive, the pool size won't increase.
Even if the remaining smaller drive fails, that doesn't mean you have to
detach it. So yes, the pool size might increase, but it won't be
"unexpectedly". It will be because you detached all smaller drives.
Also, even if a smaller drive has failed, it can still remain attached.

It doesn't make sense for attach to do anything with partition tables, IMHO.

I *always* order the spare when I order the original drives, to have it
on hand, even for my home system. Drive sizes change more frequently
than they fail, for me. Sure, when I use the spare I may not be able to
order a new spare of the same size, but at least at that time I have
time to prepare and am not scrambling.

On Mon, Dec 19, 2011 at 3:55 PM, Gregg Wonderly <greggwon at gmail.com> wrote:
> That's why I'm asking. I think it should always mirror the partition
> table and allocate exactly the same amount of space so that the pool
> doesn't suddenly change sizes unexpectedly and require a disk size that
> I don't have at hand, to put the mirror back up.
Fajar A. Nugraha
2011-Dec-20 03:10 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On Tue, Dec 20, 2011 at 9:51 AM, Frank Cusack <frank at linetwo.net> wrote:
> If you don't detach the smaller drive, the pool size won't increase.
> Even if the remaining smaller drive fails, that doesn't mean you have
> to detach it. So yes, the pool size might increase, but it won't be
> "unexpectedly". It will be because you detached all smaller drives.
> Also, even if a smaller drive has failed, it can still remain attached.

Isn't autoexpand=off by default, so it won't use the larger size anyway?

-- 
Fajar
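[Editor's note: Fajar is right that autoexpand defaults to off on releases that have the property; it is easy to confirm for the pool in question:]

```shell
# Show the current setting (defaults to off):
zpool get autoexpand rpool

# Pin it explicitly if you want to be sure a larger mirror half
# can never grow the pool:
zpool set autoexpand=off rpool
```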
Gregg Wonderly
2011-Dec-20 22:11 UTC
[zfs-discuss] Can I create a mirror for a root rpool?
On 12/19/2011 8:51 PM, Frank Cusack wrote:
> If you don't detach the smaller drive, the pool size won't increase.
> Even if the remaining smaller drive fails, that doesn't mean you have
> to detach it.

If you don't have a controller slot to connect the replacement drive
through, then you have to remove the smaller drive physically. You can
then attach the replacement drive, but will "replace" work then, or must
you remove and then add it because it is "the same disk"?

> It doesn't make sense for attach to do anything with partition tables, IMHO.

I understand that in some cases it might be more problematic for attach
to "assume" some things about partitioning. I don't know that I have
"the answer", but I know from experience that there is nothing I hate
more than having to "figure out" how to partition disks on Solaris. It's
just too painful, with so many steps and conditions of use.

> I *always* order the spare when I order the original drives, to have it
> on hand, even for my home system.

Most of the time, I have spares ready too. I have returned 4 of one
manufacturer's disks and 2 of another's, with 2 more disks showing signs
of failure. These are all SATA disks on my home server. At this point,
with drive prices so high, it's not simple to pick up a couple more
spares to have on hand. For my root pool, I had no remaining 250GB disks
of the kind I've been using for root, so I put in one of my 1.5TB spares
for the moment, until I decide whether or not to order a new small
drive.

> On Mon, Dec 19, 2011 at 3:55 PM, Gregg Wonderly <greggwon at gmail.com> wrote:
>> That's why I'm asking. I think it should always mirror the partition
>> table and allocate exactly the same amount of space so that the pool
>> doesn't suddenly change sizes unexpectedly and require a disk size
>> that I don't have at hand, to put the mirror back up.
On Tue, Dec 20, 2011 at 2:11 PM, Gregg Wonderly <greggwon at gmail.com> wrote:
> If you don't have a controller slot to connect the replacement drive
> through, then you have to remove the smaller drive physically.

Physically, yes. By detach, I meant 'zfs detach', a logical operation.

> You can then attach the replacement drive, but will "replace" work
> then, or must you remove and then add it because it is "the same disk"?

I was thinking that you leave the failed drive [logically] attached. So,
you don't 'zfs replace', you just 'zfs attach' your new drive. Yes, this
leaves the mirror in a faulted condition. You'd correct that later, when
you get a replacement smaller drive.

But, as Fajar noted, just make sure autoexpand is off, and you can still
do a 'zfs replace' operation if you like (perhaps so your monitoring
shuts up) and the pool size will not unexpectedly grow.
Of course I meant 'zpool *' not 'zfs *' below.

On Tue, Dec 20, 2011 at 4:27 PM, Frank Cusack <frank at linetwo.net> wrote:
> Physically, yes. By detach, I meant 'zfs detach', a logical operation.
> [...]
> But, as Fajar noted, just make sure autoexpand is off, and you can
> still do a 'zfs replace' operation if you like (perhaps so your
> monitoring shuts up) and the pool size will not unexpectedly grow.
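[Editor's note: with Frank's zpool spelling applied, the workflow he describes looks roughly like this. All device names are hypothetical examples:]

```shell
# Small drive c1t1d0s0 has failed; attach the larger 1.5TB spare
# alongside it instead of replacing it, leaving the failed drive
# logically attached (mirror stays faulted but usable):
zpool attach rpool c1t0d0s0 c1t2d0s0

# Make sure the pool cannot grow past the smaller members:
zpool set autoexpand=off rpool

# Later, when a matching small drive arrives, swap out the dead one:
zpool replace rpool c1t1d0s0 c1t3d0s0
```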