On May 7, 2019, at 8:25 PM, Michelle Sullivan <michelle at sorbs.net> wrote:
> Paul Mather wrote:
> >> On May 7, 2019, at 1:02 AM, Michelle Sullivan <michelle at sorbs.net> wrote:
[...]
> >>
>>>
> >>> Umm.. well I install by memory stick images and I had a 10.2 and an
> >>> 11.0 both of which had root on zfs as the default.. I had to manually
> >>> change them. I haven't looked at anything later... so did something
> >>> change? Am I in cloud cuckoo land?
>>
>>
> >> I don't know about that, but you may well be misremembering. I just
> >> pulled down the 10.2 and 11.0 installers from
> >> http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases and in
> >> both cases the choices listed in the "Partitioning" step are the same as
> >> in the current 12.0 installer: "Auto (UFS) Guided Disk Setup" is listed
> >> first and selected by default. "Auto (ZFS) Guided Root-on-ZFS" is
> >> listed last (you have to skip past other options such as manually
> >> partitioning by hand to select it).
>>
>> I'm confident in saying that ZFS is (or was) not the default
>> partitioning option in either 10.2 or 11.0 as officially released by
>> FreeBSD.
>>
> >> Did you use a custom installer you made yourself when installing 10.2 or
> >> 11.0?
>
> it was an emergency USB stick.. so downloaded straight from the website.
>
> My process is boot, select "manual" (so I can set a single partition and a
> swap partition as historically it's done other things), select the whole
> disk and create a partition - this is where I saw it... 'freebsd-zfs' as
> the default. The second 'create' defaults to 'freebsd-swap', which is always
> correct. Interestingly, the -CURRENT installer just says "freebsd" and
> not either -ufs or -zfs ... whatever that defaults to I don't know.
I still fail to see where you are getting the ZFS default idea. Using
the 10.2 installer, for example, when you select "Manual" partitioning and
click through the defaults, the "Type" you are offered when creating the
first file system is "freebsd-ufs". If you want to edit that, the help
text says "Filesystem type (e.g. freebsd-ufs, freebsd-zfs, freebsd-swap)"
(i.e., freebsd-ufs is listed ahead of freebsd-zfs).
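For what it's worth, those same type names are what you'd pass to gpart(8) if
you did the layout entirely by hand from a shell. A minimal sketch only, with
the disk device (ada0) and the sizes made up for illustration:

  # create a GPT scheme and a UFS root plus swap by hand
  gpart create -s gpt ada0
  gpart add -t freebsd-boot -s 512k ada0
  gpart add -t freebsd-ufs  -s 100g ada0
  gpart add -t freebsd-swap -s 4g   ada0

Nothing there defaults to ZFS either; you get exactly the -ufs, -zfs, or -swap
type you ask for.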
That is all aside from the fact that by skipping past the default
"Auto (UFS) Guided Disk Setup" and choosing "Manual" ("Manual Disk Setup
(experts)") you are choosing an option that assumes you are an "expert" and
thus are knowledgeable about, and responsible for, the choices you make,
whatever the subsequent menus may offer.
Again, I suggest there's no basis for the allegation that it's bad that
FreeBSD is defaulting to ZFS, because that is NOT what it's doing (and I'm
unaware of any plans for 13 to do so).
>> I don't see how any of this leads to the conclusion that ZFS is
>> "dangerous" to use as a file system.
>
> For me the 'dangerous' threshold is when it comes to 'all or nothing'. UFS -
> even when trashed (and I might add I've never had it completely trashed
> on a production image) - there are tools to recover what is left of the
> data. There are no such tools for zfs (barring the one I'm about to test
> - which will be interesting to see if it works... but even then,
> installing windows to recover freebsd :D )
You're saying that ZFS is dangerous because it has no tools for
catastrophic data recovery... other than the one you are in the process of
trying to use, and the ones that others on this thread have suggested to
you. :-\
I'm having a hard time grappling with this logic.
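To be clear about what's on the table: ZFS does ship with salvage knobs of its
own. This is only a rough sketch of the usual first resorts (the pool name
"tank" is assumed, and these aren't necessarily the exact commands others
suggested in this thread):

  zpool import -o readonly=on -f -R /mnt tank   # try a read-only import first
  zpool import -F tank                          # ask for a rewind to an earlier, consistent TXG
  zdb -e -d tank                                # inspect pool/dataset metadata without importing

None of that is a substitute for backups, which is the point below.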
>
>> What I believe is dangerous is relying on a post-mortem crash data
>> recovery methodology as a substitute for a backup strategy for data
>> that, in hindsight, is considered important enough to keep. No matter
> >> how resilient ZFS or UFS may be, they are no substitute for backups when
> >> it comes to data you care about. (File system resiliency will not
> >> protect you, e.g., from ransomware or other malicious or accidental acts
> >> of data destruction.)
>
> True, but nothing is perfect, even backups (how many times have we seen
> or heard stories where backups didn't actually work - and the problem
> was only identified when trying to recover from a problem?)
This is the nature of disaster recovery and continuity planning. The
solutions adopted are individualistic and are commensurate with the
anticipated risk/loss. I agree that backups are themselves subject to risk
that must be managed. Yet I don't consider backups "dangerous".
I don't know what the outcome of your risk assessment was or what you
determined to be your RPO (recovery point objective) and RTO (recovery time
objective) for disaster recovery, so I can't comment on whether it was
realistic or not. Whatever you chose was based on your
situation, not mine, and it is a choice you have to live with. (Bear in
mind that "not to decide is to decide.")
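(As an aside, ZFS itself makes a fairly painless backup workflow possible with
snapshots and replication. A minimal sketch only - the pool, dataset, snapshot,
and destination host names here are all made up:

  zfs snapshot -r tank/data@2019-05-07
  zfs send -R tank/data@2019-05-07 | ssh backuphost zfs receive -u backup/data
  # later runs only need to ship the delta since the previous snapshot:
  zfs send -R -i @2019-05-06 tank/data@2019-05-07 | \
      ssh backuphost zfs receive -u backup/data

The receiving side is a second, independent pool, so a trashed source pool
doesn't take the copy down with it.)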
> My situation has been made worse by the fact I was reorganising
> everything when it went down - so my backups (of the important stuff)
> were not there and that was a direct consequence of me throwing caution
> to the wind years before and stopping keeping the full mirror of the
> data...
I guess, at the time, "throwing caution to the wind" was a risk you were
prepared to take (as well as accepting the consequences).
> due to lack of space. Interestingly I have had another drive die in the
> array - and it doesn't just have one or two sectors down, it has a *lot* -
> which was not noticed by the original machine - I moved the drive to a byte
> copier, which is where it's reporting 100's of sectors damaged... could
> this be compounded by zfs/mfi driver/hba not picking up errors like it should?
Did you have regular pool scrubs enabled? They would have picked up silent
data corruption like this. They do for me.
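For reference, a scrub can be run by hand, and FreeBSD's periodic(8) framework
can schedule one automatically. A sketch, with the pool name "tank" assumed
(the periodic.conf variable names are as I remember them from
/etc/defaults/periodic.conf):

  zpool scrub tank
  zpool status -v tank    # shows scrub progress and any checksum/read/write errors

  # in /etc/periodic.conf:
  daily_scrub_zfs_enable="YES"
  daily_scrub_zfs_default_threshold="7"   # days between scrubs; the default is much longer

A scrub reads and verifies every allocated block, so a drive quietly returning
bad data tends to show up as checksum errors in zpool status long before the
pool is in real trouble.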
Cheers,
Paul.