Hi.
I'm using FreeBSD 10.1-STABLE as an application server. Last week I
noticed that the disks are always busy, while gstat shows that the
activity measured in IOPS/reads/writes is low. From my point of view:
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w  %busy Name
    8     56     50    520  160.6      6    286  157.4  100.2 gpt/zfsroot0
    8     56     51   1474  162.8      5    228  174.4   99.9 gpt/zfsroot1
These %busy numbers aren't changing much, and from my point of view
both disks are doing very little actual work.
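What strikes me is the per-read service time: ~160 ms/r for only ~50
reads/s looks very high. To rule out the drives themselves, I want to
cross-check latency below the GPT/GEOM layer with iostat; assuming, for
example, that the pool sits on ada0 and ada1 (adjust the device names):
[root@gw0:~]# iostat -x -w 1 ada0 ada1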
zpool iostat:
[root@gw0:~]# zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G     90    131  1,17M  1,38M
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    113     93   988K   418K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    112      0   795K  93,8K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    109     55  1,28M   226K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    112    116  1,36M   852K
----------  -----  -----  -----  -----  -----  -----
zfsroot      270G   186G    105     47  1,44M  1,61M
----------  -----  -----  -----  -----  -----  -----
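For a per-device breakdown of the same numbers, zpool iostat can also
be run in verbose mode:
[root@gw0:~]# zpool iostat -v zfsroot 1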
What can cause this?
The pool is indeed fragmented, but I have other servers with a
comparable amount of fragmentation, and they show no signs of being
this busy while reads/writes are this low.
# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfsroot   456G   270G   186G         -    51%    59%  1.00x  ONLINE  -
Loader settings:
vfs.root.mountfrom="zfs:zfsroot"
vfs.zfs.arc_max="2048M"
vfs.zfs.zio.use_uma=1
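Since the ARC is capped at 2 GB, a poor ARC hit ratio could be forcing
many small random reads onto the disks. The hit/miss counters can be
read from the standard arcstats sysctls:
[root@gw0:~]# sysctl kstat.zfs.misc.arcstats.size
[root@gw0:~]# sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses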
I've tried toggling vfs.zfs.zio.use_uma, but it made no noticeable
difference. I've also tried adding a separate log device; that didn't
help either, which is perhaps not surprising, since a SLOG only absorbs
synchronous writes and most of the load here is reads.
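One more thing I still plan to rule out is a background scrub or
resilver, which could keep the disks busy with reads:
[root@gw0:~]# zpool status zfsroot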
Thanks.
Eugene.