On Sun, Jan 29, 2017 at 03:15:19PM +1100, Aristedes Maniatis wrote:
> As recently as last October, the best official advice was to make a
> 64kB boot partition.
>
> https://wiki.freebsd.org/action/diff/RootOnZFS/GPTZFSBoot/Mirror?action=diff&rev1=16&rev2=17
>
> Now that turns out to be absolutely terrible advice, and some people
> (like me) have dozens of machines that will never be upgradable to
> FreeBSD 11 or higher. It looks like there is no reasonable method of
> upgrade that doesn't involve replacing every hard disk on every
> machine (that's hundreds of disks) with larger models. I use a zvol
> for swap, so I can't make swap smaller to solve the problem.
>
> I started with FreeBSD 4.1, and in 16 years... sigh...
>
> The ashift pain some years ago was also caused by FreeBSD default
> recommendations and settings not anticipating future needs quickly
> enough. But this mess now is completely self-inflicted foot shooting.
>
> 1. Why is the recommendation now 128kB and not much, much higher?
> When that limit is broken in a couple of years, will there be another
> round of annoyed users? Is someone concerned that ZFS users are
> running hard disks of under 500MB and need to save space? Surely the
> recommendation should be 512kB?
>
> 2. Is there any possible short-term future where ZFS volumes can be
> shrunk, or will I be replacing every hard disk (or rebuilding the
> machine from scratch)?

It is highly unlikely that ZFS volumes will be able to be reduced in
size, even in the long term. I believe that requires a piece of work
that has been rated as very difficult to do without violating layering
policies inside the ZFS code.

The alternative, assuming you have a pool with redundancy (e.g. a
mirror), is to do a backup, drop one half of the mirror, create a new
pool on the now-unused disk, zfs send | zfs receive, boot from the new
pool, and then drop the old pool and add its disk back to the mirror.

It's a pain and a bit of a shuffle, but it's possible. I did it on my
server once when I found that FreeBSD 9 didn't detect the disks as 4k
and the alignment was all wrong. I worked through the procedure in a
VM to validate it first, but found that in production I'd managed to
hard-code the boot pool name in /boot/loader.conf, which meant that it
didn't reboot and use the bootfs flag on the pool; it just sat at the
"Cannot mount root" prompt. Took me a while to find that loader.conf
setting and kill it.

Regards,

Gary

> 3. Is there any possibility of getting a gptzfsboot which is 64kB but
> missing certain features I might not need? e.g. a RAIDZ2 version that
> skips support for RAIDZ3.
>
> 4. Will support be added to freebsd-update to warn users BEFORE they
> try to upgrade and kill their system?
>
> Please cc me, I'm not subscribed.
>
> Ari Maniatis
>
> --
> --------------------------
> Aristedes Maniatis
> CEO, ish
> https://www.ish.com.au
> GPG fingerprint CBFB 84B4 738D 4E87 5E5C 5EFA EF6A 7D2E 3E49 102A
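[Editor's sketch of the shuffle Gary describes, assuming a two-way
mirror pool named "tank" on ada0p3/ada1p3, a new pool named "newtank",
and a root dataset ROOT/default. Every pool, dataset, and device name
here is illustrative, not taken from the thread:]

    # After taking the backup Gary mentions, split the mirror and
    # build the new pool on the freed partition
    zpool detach tank ada1p3
    zpool create newtank ada1p3
    # Copy everything across as a recursive replication stream
    # (-u so the received datasets are not mounted over the live system)
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -Fu newtank
    # Tell the boot blocks which dataset to boot, and refresh them
    zpool set bootfs=newtank/ROOT/default newtank
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
    # After rebooting onto newtank and verifying it, recycle the old disk
    zpool destroy tank
    zpool attach newtank ada1p3 ada0p3

Gary's "Cannot mount root" gotcha is most likely a stale
vfs.root.mountfrom="zfs:tank/..." line in /boot/loader.conf still
naming the old pool; removing it lets the loader honour the pool's
bootfs property instead.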
On Jan 29, 2017 6:13 AM, "Gary Palmer" <gpalmer at freebsd.org> wrote:

> On Sun, Jan 29, 2017 at 03:15:19PM +1100, Aristedes Maniatis wrote:
> > [...]
> > 2. Is there any possible short-term future where ZFS volumes can be
> > shrunk, or will I be replacing every hard disk (or rebuilding the
> > machine from scratch)?
>
> It is highly unlikely that ZFS volumes will be able to be reduced in
> size, even in the long term. I believe that requires a piece of work
> that has been rated as very difficult to do without violating
> layering policies inside the ZFS code.
>
> The alternative, assuming you have a pool with redundancy (e.g. a
> mirror), is to do a backup, drop one half of the mirror, create a new
> pool on the now-unused disk, zfs send | zfs receive, boot from the
> new pool, and then drop the old pool and add its disk back to the
> mirror.

You can also format a larger drive with the correct partition sizes
and do a "zpool replace" (for raidz vdevs) or "zpool detach/attach"
(for mirror vdevs). No send/recv required.

And you may be able to do that on the existing disks, as ZFS now
leaves a MB or two of "slack space" at the end of the device used in
the vdev. This allows for using drives/partitions that are the same
size in MB but have different numbers of sectors, which was an issue
in the early ZFS days. So you may be able to resize the freebsd-zfs
partition by a handful of KB without actually changing the size of
the vdev.

Cheers,

Freddie
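[Editor's sketch of the replace-with-a-bigger-drive route Freddie
describes, assuming a pool named "tank" whose members include ada0p3
(the member being replaced) and ada1p3, and a new, larger drive ada2.
The device names and the 512k boot partition size are illustrative
assumptions, not values from the thread:]

    # Partition the new drive with a roomier freebsd-boot partition
    gpart create -s gpt ada2
    gpart add -t freebsd-boot -s 512k -a 4k ada2
    gpart add -t freebsd-zfs -a 4k ada2
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2

    # raidz vdev: swap the old member for the new partition
    zpool replace tank ada0p3 ada2p2

    # mirror vdev: attach the new partition, let it resilver,
    # then detach the old member
    zpool attach tank ada1p3 ada2p2
    zpool detach tank ada0p3

The in-place variant would be a "gpart resize -i <index>" on the
existing freebsd-zfs partition, relying on the end-of-device slack
Freddie mentions; whether the few spare KB end up where a larger boot
partition needs them depends entirely on the existing layout.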