Michelle Sullivan
http://www.mhix.org/
Sent from my iPad
> On 30 Apr 2019, at 17:10, Andrea Venturoli <ml at netfence.it> wrote:
>
>> On 4/30/19 2:41 AM, Michelle Sullivan wrote:
>>
>> The system was originally built on 9.0, and got upgraded throughout
>> the years... zfsd was not available back then. So I get your point, but maybe
>> you didn't realize this blog was a history of 8+ years?
>
> That's one of the first things I thought about while reading the
> original post: what can be inferred from it is that ZFS might not have been
> that good in the past.
> It *could* still suffer from the same problems or it *could* have improved
> and be more resilient.
> Answering that would be interesting...
>
Without a doubt it has come a long way, but in my opinion, until there is a tool
to walk the data (to transfer it out) or something that can either repair or
invalidate bad metadata (such as a spacemap corruption), there is still a fatal
flaw that makes it questionable to use... and that is for one reason alone
(regardless of my current problems).
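For what it's worth, the closest existing tooling I'm aware of (and I may well
have missed newer work) is read-only/rewind import plus zdb inspection, roughly
along these lines ("tank" is just a placeholder pool name):

  # import without mounting anything and without allowing writes
  zpool import -o readonly=on -N -f tank
  # or ask ZFS to discard the last few transactions and roll back to an older txg
  zpool import -F tank
  # inspect the metaslabs / space maps of an exported pool without importing it
  zdb -e -mm tank

...but none of those actually repairs or invalidates bad metadata in place, which
is exactly the gap I mean.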
Consider...
If one triggers such a fault on a production server, how can one justify
transferring multiple terabytes (or even petabytes now) of data back from backup
to repair an unmountable/faulted array... because all the backup solutions I know
of currently would take days if not weeks to restore a store of the size ZFS is
touted as supporting.
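To put some purely illustrative numbers on that (my assumptions, not
measurements): restoring 100TB over a dedicated 10Gbit/s link at a best-case
~1.2GB/s is roughly 100,000,000MB / 1,200MB/s, or about 83,000 seconds - call it
a day; at a more realistic sustained 300MB/s from a tape or disk-based backup
target it is closer to 4 days, and a petabyte at that rate is over a month... and
that is before any time spent recreating the pool and verifying the restore.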
Now, yes, most production environments have multiple backing stores, so there
will be a server or ten to switch to whilst the store is being recovered, but it
still wouldn't be a pleasant experience... not to mention the possibility that if
one store is corrupted there is a chance the other store(s) would be affected in
the same way if they are in the same DC (e.g. a DC fire - which I have seen)...
and if you have multi-DC stores to protect against that, the size of the pipes
between DCs clearly comes into play.
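The same sort of arithmetic applies there (again, illustrative numbers only): an
initial full replication of 100TB across a 10Gbit/s WAN link, even at full line
rate, is about a day, and a petabyte is roughly 10 days... so in practice the
remote copy has to be kept continuously in sync (e.g. incremental zfs
send/receive) rather than rebuilt after the event, and the pipe has to be sized
for the daily churn, not the total dataset.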
Thoughts?
Michelle