tech-lists
2019-Apr-07  15:36 UTC
about zfs and ashift and changing ashift on existing zpool
Hello,
I have this in sysctl.conf on a desktop machine (12-stable):
vfs.zfs.min_auto_ashift=12
this has not always been there. I guess the zpool pre-dates it. I only
noticed it because I recently had to replace a disk in its zfs array,
which is when I saw this:
% zpool status
pool: storage
state: ONLINE
status: One or more devices is currently being resilvered.  The pool
will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Apr  7 03:09:42 2019
        3.46T scanned at 79.5M/s, 2.73T issued at 62.8M/s, 3.46T total
        931G resilvered, 78.94% done, 0 days 03:22:41 to go
config:
NAME             STATE     READ WRITE CKSUM
storage          ONLINE       0     0     0
  raidz1-0       ONLINE       0     0     0
    replacing-0  ONLINE       0     0 1.65K
      ada2       ONLINE       0     0     0
      ada1       ONLINE       0     0     0  block size: 512B configured, 4096B native
    ada3         ONLINE       0     0     0
    ada4         ONLINE       0     0     0
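
(For anyone wanting to reproduce the check, the commands below show the
tunable's current value, the ashift actually recorded in the pool, and
what the disk itself reports. Pool and device names are the ones from my
setup above; adjust to taste.)

# current floor for the ashift of newly-created vdevs
sysctl vfs.zfs.min_auto_ashift

# ashift recorded in the pool configuration
zdb -C storage | grep ashift

# logical (sectorsize) and physical (stripesize) sizes the disk reports
diskinfo -v ada1 | grep -E 'sectorsize|stripesize'
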
What I'd like to know is:
1. is the above situation harmful to data
2. given that vfs.zfs.min_auto_ashift=12, why does it still say 512B
   configured for ada1 which is the new disk, or..
3. does "configured" pertain to the pool, the disk, or both
4. what would be involved in making them all 4096B
5. does a 512B disk wear out faster than 4096B (all other things being
   equal)
Given that the machine and disks were new in 2016, I can't understand why
zfs didn't default to 4096B on installation.
thanks,
-- 
J.
Peter Jeremy
2019-Apr-08  21:28 UTC
about zfs and ashift and changing ashift on existing zpool
On 2019-Apr-07 16:36:40 +0100, tech-lists <tech-lists at zyxst.net> wrote:
>storage          ONLINE       0     0     0
>  raidz1-0       ONLINE       0     0     0
>    replacing-0  ONLINE       0     0 1.65K
>      ada2       ONLINE       0     0     0
>      ada1       ONLINE       0     0     0  block size: 512B configured, 4096B native
>    ada3         ONLINE       0     0     0
>    ada4         ONLINE       0     0     0
>
>What I'd like to know is:
>
>1. is the above situation harmful to data

In general, no. The only danger is that ZFS is updating the uberblock
replicas at the start and end of the volume assuming 512B sectors, which
means you are at a higher risk of losing one of the replica sets if a
power failure occurs during an uberblock update.

>2. given that vfs.zfs.min_auto_ashift=12, why does it still say 512B
>   configured for ada1 which is the new disk, or..

The pool is configured with ashift=9. vfs.zfs.min_auto_ashift only sets
a floor for newly-created vdevs; a replacement disk inherits the ashift
of the vdev it joins.

>3. does "configured" pertain to the pool, the disk, or both

"configured" relates to the pool - all vdevs match the pool.

>4. what would be involved in making them all 4096B

Rebuild the pool - backup/destroy/create/restore.

>5. does a 512B disk wear out faster than 4096B (all other things being
>   equal)

It shouldn't. It does mean that the disk is doing read/modify/write at
the physical sector level, but that should be masked by the drive cache.

>Given that the machine and disks were new in 2016, I can't understand why zfs
>didn't default to 4096B on installation

I can't answer that easily. The current version of ZFS looks at the
native disk blocksize to determine the pool ashift, but I'm not sure how
things were in 2016. Possibilities include:

* The pool was built explicitly with ashift=9
* The initial disks reported 512B native (I think this is most likely)
* That version of ZFS was using logical, rather than native, blocksize.

My guess (given that only ada1 is reporting a blocksize mismatch) is that
your disks reported a 512B native blocksize. In the absence of any
override, ZFS will then build an ashift=9 pool.

-- 
Peter Jeremy
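
(A rough sketch of the backup/destroy/create/restore cycle described
above, assuming the three remaining disks from the status output and a
hypothetical second pool called "backup" with enough space. Adapt before
running, and verify the backup before the destroy step, since zpool
destroy is irreversible.)

# 1. snapshot everything and replicate it to the backup pool
zfs snapshot -r storage@migrate
zfs send -R storage@migrate | zfs receive -F backup/storage

# 2. destroy and recreate the pool; with vfs.zfs.min_auto_ashift=12 set,
#    the new vdev is created with ashift=12 (4096B sectors)
zpool destroy storage
zpool create storage raidz1 ada1 ada3 ada4

# 3. restore, then confirm the new ashift
zfs send -R backup/storage@migrate | zfs receive -F storage
zdb -C storage | grep ashift
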