Hi.
I'm using FreeBSD 10.1-STABLE as an application server. Last week I
noticed that the disks are always busy, while gstat shows that the
activity measured in IOPS/reads/writes is low. From my point of view:
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    8     56     50    520  160.6      6    286  157.4   100.2 gpt/zfsroot0
    8     56     51   1474  162.8      5    228  174.4    99.9 gpt/zfsroot1
These %busy numbers aren't changing much, and from my point of view
both disks are doing very little.
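
For the record, a view like the one above can be reproduced with
something along these lines; the refresh interval and the filter regex
are arbitrary:

# refresh every second, show only the two gpt labels from above
gstat -I 1s -f 'gpt/zfsroot[01]'
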
zpool iostat:
[root@gw0:~]# zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G     90    131  1,17M  1,38M
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    113     93   988K   418K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    112      0   795K  93,8K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    109     55  1,28M   226K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    112    116  1,36M   852K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    105     47  1,44M  1,61M
----------  -----  -----  -----  -----  -----  -----
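
In case it helps, a per-vdev breakdown of the same counters can be had
with something like the following (-v just expands the pool into its
member devices):

zpool iostat -v zfsroot 1
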
What can cause this?
The pool is indeed fragmented, but I have other servers with a
comparable amount of fragmentation, and they show no such busyness
while reads/writes are this low.
# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfsroot   456G   270G   186G         -   51%    59%  1.00x  ONLINE  -
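
For comparing against the other servers, something like this keeps just
the relevant columns (property names as listed in zpool(8)):

zpool list -o name,size,allocated,capacity,fragmentation zfsroot
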
Loader settings:
vfs.root.mountfrom="zfs:zfsroot"
vfs.zfs.arc_max="2048M"
vfs.zfs.zio.use_uma=1
I've tried playing with vfs.zfs.zio.use_uma, but without any noticeable
effect. I've also tried adding separate log devices; this didn't help
either.
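
In case anyone wants to double-check: the runtime values of these
tunables and the current ARC size can be read back roughly like this
(the arcstats OIDs are as I remember them on 10.x):

sysctl vfs.zfs.arc_max vfs.zfs.zio.use_uma
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c
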
Thanks.
Eugene.
On Thu, 26 Nov 2015 14:19:18 +0500 "Eugene M. Zheganin"
<emz at norma.perm.ru> wrote:

> Hi.
>
> I'm using FreeBSD 10.1-STABLE as an application server. Last week I
> noticed that the disks are always busy, while gstat shows that the
> activity measured in IOPS/reads/writes is low. From my point of view:

Hi.

Do you have processes with STATE zio->i?
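
Something like this should find them, if any (pid/state/wchan/comm are
standard ps(1) keywords; procstat -kk <pid> then shows the kernel stack
of a suspect process):

ps -axH -o pid,state,wchan,comm | grep 'zio->'
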
Hi.

On 26.11.2015 14:19, Eugene M. Zheganin wrote:

> Hi.
>
> I'm using FreeBSD 10.1-STABLE as an application server. Last week I
> noticed that the disks are always busy, while gstat shows that the
> activity measured in IOPS/reads/writes is low. From my point of view:
>
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     8     56     50    520  160.6      6    286  157.4   100.2 gpt/zfsroot0
>     8     56     51   1474  162.8      5    228  174.4    99.9 gpt/zfsroot1
>
> These %busy numbers aren't changing much, and from my point of view
> both disks are doing very little.

The thing is, it was the compression. As soon as I cleared the gzip
compression on the busy datasets, %busy went down, almost to zero. The
affected datasets were filled with poorly compressible files, mostly
archives and zlib-compressed data.

This is somewhat counter-intuitive: one would expect the worst case to
be wasted CPU, with disk I/O staying constant. In practice the opposite
happens, and the disks end up at high %busy. Could someone explain that?

I only found this because flow-capture was taking ages to start, which
is what made me suspect the compression setting.

Eugene.
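
P.S. For anyone hitting the same thing: something like this shows which
datasets use gzip and how well they actually compress; the dataset name
in the second command is only a placeholder, and changing the property
affects newly written blocks only:

zfs get -r -t filesystem compression,compressratio zfsroot
# placeholder dataset name; existing blocks stay gzip until rewritten
zfs set compression=lz4 zfsroot/somedataset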