Hi! I've been monitoring my arrays lately, and it seems to me that the ZFS allocator might be misfiring a bit. This is all on OI 147, and if there is a problem and a fix, I'd like to see it in the next image-update =D

Here's some 60s zpool iostat output, cleaned up a bit (columns are alloc, free, read ops, write ops, read bandwidth, write bandwidth):

tank        3.76T   742G  1.03K  1.02K  3.96M  4.26M
  raidz1    2.77T   148G    718    746  2.73M  3.03M
  raidz1    1023G   593G    333    300  1.23M  1.22M

and another pool:

fast         130G   169G  2.72K    110  11.0M   614K
  mirror    37.9G  61.6G  1.35K     29  5.48M   154K
  mirror    27.6G  71.9G  1.36K     37  5.51M   203K
  mirror    64.1G  35.9G      3     42  21.8K   240K

To me it seems that writes are not being directed to the devices with the most free space - almost exactly the opposite. The writes go to the devices with the _least_ free space. The same effect visible in these 60s averages can also be observed over a shorter timespan, a second or so.

Is there something obvious I'm missing?

--
- Tuomas
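To put rough numbers on the imbalance described above, here is a small sketch (plain Python; the raidz1-0/raidz1-1 labels are invented for the example, and the free-space and write-bandwidth values are copied from the "tank" sample) that compares each top-level vdev's share of the pool's free space with its share of the write bandwidth. With a free-space-weighted allocator the two shares should roughly track each other; here they come out close to inverted.

# Sketch: compare free-space share vs. write-bandwidth share per vdev.
# Values copied from the 60s iostat sample above; vdev labels are made up.
SUFFIX = {'K': 2**10, 'M': 2**20, 'G': 2**30, 'T': 2**40}

def to_bytes(s):
    # "148G" -> 148 * 2**30; plain numbers pass through unchanged
    return float(s[:-1]) * SUFFIX[s[-1]] if s[-1] in SUFFIX else float(s)

vdevs = {                       # name: (free space, write bandwidth)
    'raidz1-0': ('148G', '3.03M'),
    'raidz1-1': ('593G', '1.22M'),
}

total_free = sum(to_bytes(free) for free, _ in vdevs.values())
total_wbw  = sum(to_bytes(wbw)  for _, wbw in vdevs.values())

for name, (free, wbw) in vdevs.items():
    print('%-9s free share %5.1f%%   write share %5.1f%%' % (
        name,
        100 * to_bytes(free) / total_free,
        100 * to_bytes(wbw) / total_wbw))

The output shows the vdev holding about 20% of the free space receiving about 70% of the write bandwidth, which is the opposite of what a space-weighted allocator should do.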
> To me it seems that writes are not being directed to the devices with the most free space - almost exactly the opposite. The writes go to the devices with the _least_ free space. The same effect visible in these 60s averages can also be observed over a shorter timespan, a second or so.
>
> Is there something obvious I'm missing?

Not sure how OI should behave; I've managed to even out writes and space usage between vdevs by bringing a device offline in the vdev you don't want writes to end up on. If you have a degraded vdev in your pool, ZFS will try not to write there, and that may be what's happening here as well - I don't see your zpool status output.

Yours
Markus Kovero
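If someone wants to script that workaround, a minimal sketch could look like the following (Python; the pool and device names are hypothetical, and only the zpool offline/online/status invocations reflect real commands, the rest is plumbing). Note that offlining a disk reduces that vdev's redundancy for the duration, so it is a blunt instrument.

# Sketch of the workaround: temporarily offline one disk of the vdev you
# want writes steered away from, then bring it back and let it resilver.
# Pool and device names below are made up - substitute your own.
import subprocess

POOL = 'tank'
DEVICE = 'c0t3d0'            # a disk in the "full" raidz1 vdev (hypothetical)

def zpool(*args):
    subprocess.check_call(['zpool'] + list(args))

zpool('offline', POOL, DEVICE)   # vdev is now degraded; new writes should avoid it
# ... run the writes you want spread onto the emptier vdevs ...
zpool('online', POOL, DEVICE)    # re-attach; ZFS resilvers what it missed
zpool('status', POOL)            # check resilver progress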
Thanks for the input. This was not a case of a degraded vdev, only a missing log device (which I cannot get rid of..). I'll try offlining some vdevs and see what happens - although this should be automatic at all times IMO.

On Jun 30, 2011 1:25 PM, "Markus Kovero" <Markus.Kovero at nebula.fi> wrote:

> Not sure how OI should behave; I've managed to even out writes and space usage between vdevs by bringing a device offline in the vdev you don't want writes to end up on.
> If you have a degraded vdev in your pool, ZFS will try not to write there, and that may be what's happening here as well - I don't see your zpool status output.
Sorry everyone, this one was indeed a case of root stupidity. I had forgotten to upgrade to OI 148, which apparently fixed the write balancer. Duh. (Didn't find a full changelog via Google, though.)

On Jun 30, 2011 3:12 PM, "Tuomas Leikola" <tuomas.leikola at gmail.com> wrote:

> Thanks for the input. This was not a case of a degraded vdev, only a missing log device (which I cannot get rid of..). I'll try offlining some vdevs and see what happens - although this should be automatic at all times IMO.
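For reference, the behaviour one would expect from a working write balancer is roughly "bias new allocations toward the vdev with the most free space". The real ZFS metaslab allocator is considerably more involved than this, but a toy model of that bias, using the free-space figures from the "tank" pool above (all names invented), looks something like:

# Toy model of free-space-biased vdev selection - NOT the actual ZFS
# metaslab allocator, just an illustration of the expected bias.
import random

def pick_vdev(vdevs):
    """Pick a vdev with probability proportional to its free space."""
    total_free = sum(v['free'] for v in vdevs)
    r = random.uniform(0, total_free)
    for v in vdevs:
        r -= v['free']
        if r <= 0:
            return v
    return vdevs[-1]

# Free space (in GB) from the "tank" example: 148G vs. 593G.
vdevs = [{'name': 'raidz1-0', 'free': 148}, {'name': 'raidz1-1', 'free': 593}]
counts = {'raidz1-0': 0, 'raidz1-1': 0}
for _ in range(10000):
    counts[pick_vdev(vdevs)['name']] += 1
print(counts)   # roughly a 20% / 80% split, the reverse of the observed iostat numbers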