Michelle Sullivan
http://www.mhix.org/
Sent from my iPad

> On 07 May 2019, at 10:53, Paul Mather <paul at gromit.dlib.vt.edu> wrote:
>
>> On May 6, 2019, at 10:14 AM, Michelle Sullivan <michelle at sorbs.net> wrote:
>>
>> My issue here (and not really what the blog is about) is that FreeBSD is defaulting to it.
>
> You've said this at least twice now in this thread, so I'm assuming you're asserting it to be true.
>
> As of FreeBSD 12.0-RELEASE (and all earlier releases), FreeBSD does NOT default to ZFS.
>
> The images distributed by freebsd.org, e.g., Vagrant boxes, ARM images, EC2 instances, etc., contain disk images where FreeBSD resides on UFS.  For example, here's what you end up with when you launch a 12.0-RELEASE instance using defaults on AWS (us-east-1 region: ami-03b0f822e17669866):
>
> root at freebsd:/usr/home/ec2-user # gpart show
> =>      3  20971509  ada0  GPT  (10G)
>         3       123     1  freebsd-boot  (62K)
>       126  20971386     2  freebsd-ufs  (10G)
>
> And this is what you get when you "vagrant up" the freebsd/FreeBSD-12.0-RELEASE box:
>
> root at freebsd:/home/vagrant # gpart show
> =>        3  65013755  ada0  GPT  (31G)
>           3       123     1  freebsd-boot  (62K)
>         126   2097152     2  freebsd-swap  (1.0G)
>     2097278  62914560     3  freebsd-ufs  (30G)
>    65011838      1920        - free -  (960K)
>
> When you install from the 12.0-RELEASE ISO, the first option listed during the partitioning stage is "Auto (UFS) Guided Disk Setup".  The last option listed, after "Open a shell and partition by hand", is "Auto (ZFS) Guided Root-on-ZFS".  In other words, you have to skip over UFS and manual partitioning to select the ZFS install option.
>
> So, I don't see what evidence there is that FreeBSD is defaulting to ZFS.  It hasn't up to now.  Will FreeBSD 13 default to ZFS?

Umm.. well, I install by memory stick images, and I had a 10.2 and an 11.0 both of which had root on ZFS as the default.. I had to manually change them.  I haven't looked at anything later... so did something change?  Am I in cloud cuckoo land?
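[For anyone wanting to check a running system rather than the installer menus, standard FreeBSD commands show whether root is on UFS or ZFS.  A quick sketch; the output lines are illustrative, not from the systems in this thread:

    # mount | head -1
    zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)    <- a ZFS-on-root system
    /dev/ada0p2 on / (ufs, local, journaled soft-updates)       <- what a UFS root shows instead

    # zpool list          # reports "no pools available" on a UFS-only install
    # gpart show          # look for freebsd-ufs vs. freebsd-zfs partitions, as above
]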
>> FreeBSD used to be targeted at enterprise and devs (which is where I found it)... however the last few years have been a big push into the consumer (compete-with-Linux) market.. so you have an OS that concerns itself with the desktop, and upgrade after upgrade after upgrade (not just patching security issues, but upgrades as well.. just like Windows and OS X)... I get it.. the money is in the keeping of the user base.. but then you install a file system which is dangerous on a single disk by default... dangerous because it's trusted and "can't fail" .. until it goes titsup.com and then the entire drive is lost and all the data on it.. it's the double standard... advocate you need ECC RAM, multiple vdevs, etc., then single-drive it.. sorry.. which one is it?  Gaaaaaarrrrrrrgggghhhhhhh!
>
> As people have pointed out elsewhere in this thread, it's false to claim that ZFS is unsafe on consumer hardware.  It's no less safe than UFS on single-disk setups.
>
> Because anecdote is not evidence, I will refrain from saying, "I've lost far more data on UFS than I have on ZFS (especially when SUJ was shaking out its bugs)..."  ;-)
>
> What I will agree with is that, probably due to its relative youth, ZFS has fewer forensics/data recovery tools than UFS.  I'm sure this will improve as time goes on.  (I even posted a link to an article describing someone adding ZFS support to a forensics toolkit earlier in this thread.)
>
> Cheers,
>
> Paul.

The problem I see with that statement is that the zfs dev mailing lists constantly and consistently follow the line of "the data is always right, there is no need for a 'fsck'" (which I actually get), but it's used to shut down every thread... the irony is I'm now installing Windows 7 and SP1 on a USB stick (well, it's actually installed, but SP1 isn't finished yet) so I can install a ZFS data recovery tool which reports to be able to "walk the data" to retrieve all the files... the irony, eh... install Windows 7 on a USB stick to recover a FreeBSD-installed ZFS filesystem... Will let you know if the tool works, but as it was recommended by a dev I'm hopeful... I have another array (with ZFS, I might add) loaded and ready to go... if the data recovery is successful I'll blow away the original machine and work out what OS and drive setup will be safe for the data in the future.  I might even put FreeBSD and ZFS back on it, but if I do it won't be in the current Zraid2 config.
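[As context for the recovery-tool discussion: before reaching for third-party tools, ZFS itself ships some read-only inspection and last-ditch import options.  A minimal sketch with hypothetical pool and device names; none of this is from the machine discussed in the thread:

    # zdb -l /dev/ada0p3                    # dump the ZFS labels on a device: pool name,
                                            # GUIDs, vdev configuration
    # zpool import                          # scan for importable pools without importing
    # zpool import -o readonly=on -f tank   # safest first attempt: read-only import
    # zpool import -F -n tank               # dry run of rewind recovery: reports whether
                                            # discarding the last few transactions would
                                            # allow import, without changing anything
    # zdb -e -dd tank                       # walk the datasets/objects of an exported pool
]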
On May 7, 2019, at 1:02 AM, Michelle Sullivan <michelle at sorbs.net> wrote:

>> [...] So, I don't see what evidence there is that FreeBSD is defaulting to ZFS.  It hasn't up to now.  Will FreeBSD 13 default to ZFS?
>
> Umm.. well, I install by memory stick images, and I had a 10.2 and an 11.0 both of which had root on ZFS as the default.. I had to manually change them.  I haven't looked at anything later... so did something change?  Am I in cloud cuckoo land?

I don't know about that, but you may well be misremembering.  I just pulled down the 10.2 and 11.0 installers from http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases and in both cases the choices listed in the "Partitioning" step are the same as in the current 12.0 installer: "Auto (UFS) Guided Disk Setup" is listed first and selected by default.  "Auto (ZFS) Guided Root-on-ZFS" is listed last (you have to skip past other options, such as manually partitioning by hand, to select it).

I'm confident in saying that ZFS is (or was) not the default partitioning option in either 10.2 or 11.0 as officially released by FreeBSD.  Did you use a custom installer you made yourself when installing 10.2 or 11.0?
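[Relatedly, in bsdinstall(8)'s scripted mode the root filesystem is likewise an explicit opt-in.  A rough sketch of an installerconfig preamble; variable values are examples from the man page conventions as remembered, not from this thread:

    # scripted UFS install: default auto layout on the named disk
    PARTITIONS=ada0
    DISTRIBUTIONS="kernel.txz base.txz"

    # ...or opt into root-on-ZFS explicitly via the zfsboot variables:
    # export ZFSBOOT_DISKS=ada0
    # export ZFSBOOT_VDEV_TYPE=stripe

    #!/bin/sh
    echo "post-install commands run here"
]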
> The problem I see with that statement is that the zfs dev mailing lists constantly and consistently follow the line of "the data is always right, there is no need for a 'fsck'" (which I actually get), but it's used to shut down every thread... the irony is I'm now installing Windows 7 and SP1 on a USB stick so I can install a ZFS data recovery tool which reports to be able to "walk the data" to retrieve all the files... [...] I might even put FreeBSD and ZFS back on it, but if I do it won't be in the current Zraid2 config.

There is no more irony in installing a data recovery tool to recover a trashed ZFS pool than there is in installing one to recover a trashed UFS file system.  No file system is bulletproof, which is why everyone I know recommends a backup/disaster recovery strategy commensurate with the value you place on your data.  There WILL be some combination of events that leads to irretrievable data loss.  Your extraordinary sequence of mishaps apparently met that threshold for ZFS on your setup.  I don't see how any of this leads to the conclusion that ZFS is "dangerous" to use as a file system.

What I believe is dangerous is relying on a post-mortem crash data recovery methodology as a substitute for a backup strategy for data that, in hindsight, is considered important enough to keep.  No matter how resilient ZFS or UFS may be, they are no substitute for backups when it comes to data you care about.  (File system resiliency will not protect you, e.g., from ransomware or other malicious or accidental acts of data destruction.)

Cheers,

Paul.
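[A short illustration of the kind of backup strategy being advocated here: ZFS's own snapshot and replication primitives make it cheap.  Pool, dataset, and host names below are made up:

    # zfs snapshot tank/data@2019-05-07             # atomic point-in-time snapshot; instant
    # zfs send tank/data@2019-05-07 | \
        ssh backuphost zfs receive backup/data      # first run: full replica on another box
    # zfs send -i @2019-05-06 tank/data@2019-05-07 | \
        ssh backuphost zfs receive -F backup/data   # later runs ship only the changed blocks;
                                                    # -F first rolls the target back to the
                                                    # last common snapshot
]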
On 5/7/2019 00:02, Michelle Sullivan wrote:
> The problem I see with that statement is that the zfs dev mailing lists constantly and consistently follow the line of "the data is always right, there is no need for a 'fsck'" [...] if the data recovery is successful I'll blow away the original machine and work out what OS and drive setup will be safe for the data in the future.  I might even put FreeBSD and ZFS back on it, but if I do it won't be in the current Zraid2 config.

Meh.  Hardware failure is, well, hardware failure.  Yes, power-related failures are hardware failures.  Never mind the potential for /software/ failures.  Bugs are, well, bugs.  And they're a real thing.  Never had the shortcomings of UFS bite you on an "unexpected" power loss?  Well, I have.  Is ZFS absolutely safe against any such event?  No, but it's safe*r*.

I've yet to have ZFS lose an entire pool due to something bad happening, but the same basic risk (the entire filesystem being gone) has occurred more than once in my IT career with other filesystems -- including UFS, lowly MSDOS and NTFS, never mind their predecessors all the way back to floppy disks and the first 5MB Winchesters.  I learned a long time ago that two is one and one is none when it comes to data, and WHEN two becomes one you SWEAT, because that second failure CAN happen at the worst possible time.

As for RaidZ2 vs. mirrored, it's not as simple as you might think.  Mirrored vdevs can only lose one member per mirror set, unless you use three-member mirrors.  That sounds insane, but actually it isn't in certain circumstances, such as very read-heavy and high-performance-read environments.

The short answer is that a 2-way mirrored set is materially faster on reads but has no acceleration on writes, and can lose one member per mirror.  If the SECOND one fails before you can resilver, and that resilver takes quite a long while if the disks are large, you're dead.  However, if you do six drives as a 2x3-way mirror (that is, 3 vdevs, each a 2-way mirror) you now have three parallel data paths going at once, and potentially six for reads -- and performance is MUCH better.  A 3-way mirror can lose two members (and six drives could be organized as 3x2, i.e., two vdevs of 3-way mirrors) but obviously requires lots of drive slots and 3x as much *power* per gigabyte stored (and you pay for power twice: once to buy it and again to get the heat out of the room where the machine is.)

RaidZ2 can also lose 2 drives without being dead.  However, it doesn't get any of the read performance improvement *and* takes a write performance penalty; Z2 has more write penalty than Z1, since it has to compute and write two parity entries instead of one, although in theory at least it can parallel those parity writes -- albeit at the cost of drive bandwidth congestion (e.g., interfering with other accesses to the same disk at the same time.)  In short, RaidZx performs about as "well" as the *slowest* disk in the set.
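[To make the two six-drive layouts Karl compares concrete, here is how each would be created; device names are illustrative, not from the thread:

    # 3 vdevs, each a 2-way mirror ("2x3"): reads striped across six disks,
    # writes across three vdevs; survives one failure per mirror pair
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

    # a single 6-disk RaidZ2 vdev: survives any two failures, more usable
    # space, but roughly the performance of the slowest single disk
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
]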
So why use it (particularly Z2) at all?  Because for "N" drives you get the protection of a 3-way mirror and *much* more storage.  A six-member RaidZ2 setup (with 1TB drives, say) returns ~4TB of usable space, where a 2-way mirror returns 3TB and a 3-way mirror (which provides the same protection against drive failure as Z2) only 2TB, half the storage.  IMHO ordinary RaidZ isn't worth the trade-offs, but Z2 frequently is.

In addition, more spindles means more failures, all other things being equal, so if you need "X" TB of storage and organize it as 3-way mirrors you now have twice as many physical spindles, which means on average you'll take twice as many faults.  If performance is more important, then the choice is obvious.  If density is more important (that is, a lot or even most of the data is rarely accessed at all) then the choice is fairly simple too.  In many workloads you have some of both, and thus the correct choice is a hybrid arrangement; that's what I do here, because I have a lot of data that is rarely-to-never accessed and read-only, but also some data that is frequently accessed and frequently written.  One size does not fit all in such a workload.

MOST systems, by the way, have this sort of paradigm (a huge percentage of the data is rarely read and never written) but it doesn't become economic or sane to try to separate them until you get well into the terabytes-of-storage range and a half-dozen or so physical volumes.  There's a very clean argument that below that point, with more than one drive, mirroring is always the better choice.

Note that if you have an *adapter* go insane (and as I've noted here, I've had it happen TWICE in my IT career!) then *all* of the data on the disks served by that adapter is screwed.  It doesn't make a bit of difference what filesystem you're using in that scenario, and thus you had better have a backup scheme and make sure it works as well, never mind software bugs or administrator stupidity ("dd" as root to the wrong target, for example, will reliably screw you every single time!)

For a single-disk machine, ZFS is no *less* safe than UFS and provides a number of advantages, arguably the most important being easily-used snapshots.  Not only does this simplify backups, since coherency during the backup is never at issue and incremental backups become fast and easily done; boot environments also make roll-forward and even *roll-back* reasonable to implement for software updates -- a critical capability if you ever run an OS version update and something goes seriously wrong with it.  If you've never had that happen, then consider yourself blessed; it's NOT fun to manage in a UFS environment and often winds up leading to a "restore from backup" scenario.  (To be fair, it can with ZFS too if you're foolish enough to upgrade the pool before being sure you're happy with the new OS rev.)

--
Karl Denninger
karl at denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/
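[To illustrate the boot-environment workflow Karl describes, a minimal sketch using bectl(8), which is in the base system from FreeBSD 12 (beadm from ports fills the same role on earlier releases).  The environment name is made up:

    # bectl create pre-upgrade              # checkpoint the current root before upgrading
    # freebsd-update -r 12.0-RELEASE upgrade
    # freebsd-update install
    ...reboot and test; if the new world misbehaves:
    # bectl activate pre-upgrade            # next boot returns to the old environment
    # shutdown -r now

    # Per Karl's caveat: hold off on "zpool upgrade" until you are sure you
    # are staying on the new release, or the old environment may not import
    # the pool's new feature flags.
]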