On Mon, 13 Aug 2018 10:56:01 -0400
"Kevin P. Neal" <kpn at neutralgood.org> wrote:
> On Sun, Aug 12, 2018 at 08:50:47PM +0200, Marco Steinbach wrote:
> > Hi there.
> >
> > % zpool list
> > NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > zroot  5.41T   670G  4.75T         -    13%    12%  1.00x  ONLINE  -
> >
> > % uname -a
> > FreeBSD XXX 11.1-STABLE FreeBSD 11.1-STABLE #0 r322984 [...] amd64
> >
> >
> > I'm running multiple jails on ZFS, using ezjail to manage them,
> > including a webserver and a mailserver. The mailserver uses a
> > MySQL database and otherwise depends on dovecot and postfix. Very
> > low volume, just a few polls / logins per minute.
> >
> > I am experiencing very high loads as per gstat:
> >
> > dT: 1.021s  w: 1.000s
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
> >     4    181      0      0    0.0    169    873   20.1   98.2| ada0
> >     2    111      0      0    0.0    100    540    7.3   90.6| ada1
> >     0     88      0      0    0.0     76    458    1.4   43.3| ada2
> >     0      0      0      0    0.0      0      0    0.0    0.0| ada0p1
> >     3    150      0      0    0.0    150    603   20.2   95.1| ada0p2
> >     1     31      0      0    0.0     20    270   19.2  117.0| ada0p3
> >     0      0      0      0    0.0      0      0    0.0    0.0| gpt/gptboot0
> >     0      0      0      0    0.0      0      0    0.0    0.0| ada1p1
> >     1     85      0      0    0.0     85    341    8.4   68.9| ada1p2
> >     1     25      0      0    0.0     15    200    0.9   75.0| ada1p3
> >     0      0      0      0    0.0      0      0    0.0    0.0| ada2p1
> >     0     62      0      0    0.0     62    251    1.6    9.9| ada2p2
> >     0     26      0      0    0.0     15    208    0.5   42.0| ada2p3
> >     0      0      0      0    0.0      0      0    0.0    0.0| gpt/gptboot1
> >     0      0      0      0    0.0      0      0    0.0    0.0| gpt/gptboot2
>
> Say, these don't look very evenly spread.
>
> What's ada0p3?
I posted 'zpool list' instead of 'zpool status' :/
It's the third drive in my raidz1.
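For completeness, the actual layout comes from 'zpool status' (pool
name taken from the zpool list output above):

% zpool status zroot

which lists each member of the raidz1 vdev.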
> > These loads delay the system's responses so badly that even
> > echoing characters typed on the console lags noticeably, which
> > renders the offered services unusable.
>
> How much memory does your machine have, and how much is being used by
> ZFS? You can get that from the default top display if you want.
>
> You can also use 'zpool iostat <interval>' to see how much traffic is
> going specifically to ZFS.
>
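For what it's worth, here is how I check both. On FreeBSD, top prints
an ARC summary line in its header, and the arcstats sysctls report the
same numbers non-interactively (sysctl names as in stock FreeBSD ZFS,
to the best of my knowledge):

% top -b -d1 | grep ^ARC
% sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

And 'zpool iostat -v zroot 5' breaks the traffic down per vdev member
every five seconds, which is handy for checking balance.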
I see a lot of imbalanced I/O on my zpools, even though the drives in
my raidz1 setups are the exact same model and firmware revision.
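If it helps, gstat can be limited to the physical providers, which
makes the imbalance easier to spot than in the full partition list
quoted above (flags as in FreeBSD's gstat, to the best of my
knowledge):

% gstat -p -I 1s

-p drops the partitions and labels, -I sets the sampling interval.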
All my machines have at least 32GB of RAM. I've limited the maximum ARC
size to 4GB, though, since the last time I checked, ZFS grabbed quite a
large piece of the cake and didn't give it back unless limited by
setting vfs.zfs.arc_max in loader.conf :)
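For reference, the limit is a single line in /boot/loader.conf, and
the value accepts the usual size suffixes:

vfs.zfs.arc_max="4G"

Being a loader tunable, it is picked up at boot.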
I've since found the culprit, I think. Please see my other post in this
thread.
Best regards,
CoCo