I'm curious if anyone has come up with a way to do this...

I have a system here that has two pools -- one comprised of SSD disks that are the "most commonly used" things, including user home directories and mailboxes, and another that is comprised of very large things that are far less commonly used (e.g. video data files, media, build environments for various devices, etc.)

The second pool has perhaps two dozen filesystems that are mounted but, again, rarely accessed. However, despite them being rarely accessed, ZFS appears to perform various maintenance checkpoint functions on a nearly-continuous basis, because there's a low, but not zero, level of I/O traffic to and from them. Thus if I set power control (e.g. spin down after 5 minutes of inactivity) they never do. I could simply export the pool, but I greatly prefer not to do that because some of the data on that pool (e.g. backups from PCs) is information that, if a user wants to get to it, ought to "just work."

Well, one disk is no big deal. A rack full of them is another matter. I could materially cut this box's power consumption (likely by a third or more) if those disks were spun down for 95% of the time the box is up, but with the "standard" way ZFS does things that doesn't appear to be possible.

Has anyone taken a crack at changing the paradigm (e.g. using the automounter, perhaps?) to get around this?

--
Karl Denninger
karl at denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/
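For reference, the "power control" in question can be set per-drive on FreeBSD with camcontrol(8); a minimal sketch, assuming an ATA disk at ada2 and a 5-minute timer (both hypothetical):

    # Put the drive into STANDBY and arm its firmware auto-standby
    # timer (~300 seconds). Any host I/O resets the timer -- which is
    # exactly why the low-level traffic described above defeats it.
    camcontrol standby ada2 -t 300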
On Wed, Dec 18, 2019 at 9:22 AM Karl Denninger <karl at denninger.net> wrote:

> I'm curious if anyone has come up with a way to do this...
>
> I have a system here that has two pools -- one comprised of SSD disks
> that are the "most commonly used" things including user home directories
> and mailboxes, and another that is comprised of very large things that
> are far less-commonly used (e.g. video data files, media, build
> environments for various devices, etc.)
>
> The second pool has perhaps two dozen filesystems that are mounted, but
> again, rarely accessed. However, despite them being rarely accessed ZFS
> performs various maintenance checkpoint functions on a nearly-continuous
> basis (it appears) because there's a low level, but not zero, amount of
> I/O traffic to and from them. Thus if I set power control (e.g. spin
> down after 5 minutes of inactivity) they never do. I could simply
> export the pool but I prefer (greatly) to not do that because some of
> the data on that pool (e.g. backups from PCs) is information that if a
> user wants to get to it it ought to "just work."
>
> Well, one disk is no big deal. A rack full of them is another matter.
> I could materially cut the power consumption of this box down (likely by
> a third or more) if those disks were spun down during 95% of the time
> the box is up, but with the "standard" way ZFS does things that doesn't
> appear to be possible.
>
> Has anyone taken a crack at changing the paradigm (e.g. using the
> automounter, perhaps?) to get around this?
>
> --
> Karl Denninger
> karl at denninger.net
> /The Market Ticker/
> /[S/MIME encrypted email preferred]/

I have, and I found that it wasn't actually ZFS's fault. By itself, ZFS wasn't initiating any background I/O whatsoever. I used a combination of fstat and dtrace to track down the culprit processes. Once I had shut down, patched, or reconfigured each of those processes, the disks stayed idle indefinitely. You might have success using the same strategy.

I suspect that the automounter wouldn't help you, because any access that ought to "just work" for a normal user would likewise "just work" for whatever background process is hitting your disks right now.

-Alan
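A minimal sketch of that tracking step, assuming the idle pool is mounted at /backups (a hypothetical path) -- verify the io provider's argument fields against dtrace(1) on your release:

    # Attribute in-flight disk I/O to process names and device names;
    # let it run a while, then Ctrl-C and read the aggregation. Whatever
    # shows up against the "idle" pool's disks is a culprit.
    dtrace -n 'io:::start { @[execname, args[1]->dev_name] = count(); }'

    # Separately, list processes holding files open on that filesystem:
    fstat -f /backups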
On Wed, 18 Dec 2019 17:22:16 +0100, Karl Denninger <karl at denninger.net> wrote:

> I'm curious if anyone has come up with a way to do this...
>
> I have a system here that has two pools -- one comprised of SSD disks
> that are the "most commonly used" things including user home directories
> and mailboxes, and another that is comprised of very large things that
> are far less-commonly used (e.g. video data files, media, build
> environments for various devices, etc.)

I've been using such a configuration for more than 10 years already, and haven't perceived the problems you describe. Disks are powered down with gstopd or other means, and they stay powered down until filesystems in the pool are actively accessed.

A difficulty for me was that postgres autovacuum must be completely disabled if there are tablespaces on the quiesced pools. Another thing that comes to mind is smartctl in daemon mode (but I never used that). There are probably a whole bunch more potential culprits, so I suggest you work through all the housekeeping stuff (daemons, cronjobs, etc.) to find it.
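For the two culprits named above, the knobs look roughly like this (a sketch; the device name and file paths are assumptions, and disabling autovacuum globally has trade-offs -- per-table "ALTER TABLE ... SET (autovacuum_enabled = off)" is a narrower alternative):

    # postgresql.conf: stop autovacuum from waking tablespaces
    # that live on the quiesced pool (disables it globally).
    autovacuum = off

    # /usr/local/etc/smartd.conf: don't spin a disk up just to poll
    # SMART data -- skip the check while the drive is in STANDBY.
    /dev/ada2 -a -n standby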