Hi All,

In reading the ZFS Best Practices Guide, I'm curious if this statement is
still true about 80% utilization.

From: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

> Storage Pool Performance Considerations
>
> Keep pool space under 80% utilization to maintain pool performance.
> Currently, pool performance can degrade when a pool is very full and
> file systems are updated frequently, such as on a busy mail server.
> Full pools might cause a performance penalty, but no other issues.

Dave
Hi Dave,

Still true.

Thanks,
Cindy

On 02/25/11 13:34, David Blasingame Oracle wrote:
> In reading the ZFS Best Practices Guide, I'm curious if this statement
> is still true about 80% utilization.
>
> > Keep pool space under 80% utilization to maintain pool performance.
>
> Dave

_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:

> In reading the ZFS Best Practices Guide, I'm curious if this statement
> is still true about 80% utilization.

It happens at about 90% for me.. all of a sudden, the mail server got
butt slow.. killed an old snapshot to get down to 85% used or so, then
it got snappy again. S10u9 sparc.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
On 2/25/2011 3:49 PM, Tomas Ögren wrote:
> It happens at about 90% for me.. all of a sudden, the mail server got
> butt slow.. killed an old snapshot to get down to 85% used or so, then
> it got snappy again. S10u9 sparc.

Some of the recent updates have pushed the 80% watermark closer to 90%
for most workloads.
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of David Blasingame Oracle
>
> Keep pool space under 80% utilization to maintain pool performance.

For what it's worth, the same is true for any other filesystem too.
What really matters is the availability of suitably large unused
sections of the disk. The larger the total space in your storage, the
higher the percentage used can be while still leaving enough unused
space to perform reasonably well.

The more sequential your IO operations are, the less fragmentation
you'll experience, and the less of a problem there will be. If your
workload is highly random, with a mixture of large & small operations,
and lots of snapshots being created and destroyed all the time, then
you'll fragment the drive quite a lot and see this more.

The 80% or 90% thing is just a rule of thumb. But you positively DON'T
want to hit 100% full. I've had this happen and been required to power
cycle and remove things in single user mode in order to bring it back
up. It's not as if 100% full is certain to cause a problem... I can
look up details if someone wants to know... There is a specific
condition that only occurs sometimes when 100% full, which essentially
makes the system unusable.

But there is one specific thing, isn't there? Where ZFS will choose to
use a different algorithm for something, when pool usage exceeds some
threshold. Right? What is that?
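The point above, that the absolute amount of free space matters more than the percentage, can be sketched with trivial arithmetic. The numbers below are purely illustrative, not from any real pool:

```python
# At the same 80% utilization, a larger pool leaves the allocator far
# more absolute free space to find large contiguous regions in.
def free_space_at(pool_size_tb, utilization):
    """Absolute free space (in TB) at a given utilization fraction."""
    return pool_size_tb * (1.0 - utilization)

for size_tb in (1, 10, 100):
    free = free_space_at(size_tb, 0.80)
    print(f"{size_tb:>4} TB pool at 80% used: {free:.1f} TB free")
```

A 100 TB pool at 80% used still has 20 TB of free space to allocate from, while a 1 TB pool has only 0.2 TB, which is why the rule-of-thumb percentage can safely drift higher on larger pools.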
> In reading the ZFS Best Practices Guide, I'm curious if this statement
> is still true about 80% utilization.

It is, and in my experience it doesn't matter much if you have a full
pool and add another VDEV: the existing VDEVs will still be full, and
performance will still be slow. For this reason, new systems are set up
with more, smaller drives, to make later upgrades easier by replacing
the drives with larger ones. Hopefully we might see block pointer
rewrite some time in the future, to help rebalance pools.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented
intelligibly. It is an elementary imperative for all pedagogues to
avoid excessive use of idioms of foreign origin. In most cases,
adequate and relevant synonyms exist in Norwegian.
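The "existing VDEVs stay full" effect above can be sketched with a toy model. This is a simplification under an assumed proportional-to-free-space write policy, not the actual ZFS allocator weighting; the vdev sizes and numbers are made up:

```python
# Toy sketch: adding an empty vdev to a nearly full pool does not move
# existing blocks. New writes are biased toward the emptier vdev, but
# the old vdev stays full until its data is rewritten.
def distribute_writes(vdevs, amount):
    """Spread new writes across vdevs in proportion to their free space
    (a simplification of the real allocator's weighting)."""
    total_free = sum(v["size"] - v["used"] for v in vdevs)
    for v in vdevs:
        share = (v["size"] - v["used"]) / total_free
        v["used"] += amount * share
    return vdevs

pool = [
    {"name": "old vdev", "size": 100, "used": 95},  # nearly full
    {"name": "new vdev", "size": 100, "used": 0},   # just added
]
for v in distribute_writes(pool, 20):
    print(f'{v["name"]}: {v["used"]:.1f}/{v["size"]} used')
```

Almost all of the 20 new units land on the new vdev, so reads of pre-existing data still hit the full, fragmented old vdev, which is why adding a vdev alone does not restore performance.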
On Sun, Feb 27, 2011 at 6:59 AM, Edward Ned Harvey
<opensolarisisdeadlongliveopensolaris at nedharvey.com> wrote:
> But there is one specific thing, isn't there? Where ZFS will choose to
> use a different algorithm for something, when pool usage exceeds some
> threshold. Right? What is that?

It moves from "best fit" to "any fit" at a certain point, which is at
~95% (I think). Best fit looks for a large contiguous space to avoid
fragmentation while any fit looks for any free space.

-B
--
Brandon High : bhigh at freaks.com
On 27/02/11 9:59 AM, Edward Ned Harvey wrote:
>> Keep pool space under 80% utilization to maintain pool performance.
>
> For what it's worth, the same is true for any other filesystem too.
> What really matters is the availability of suitably large unused
> sections of the disk. ...

I would expect COW puts more pressure on near-full behaviour compared
to write-in-place filesystems. If that's not true, somebody correct me.

--Toby
On Mon, Feb 28 at 0:30, Toby Thain wrote:
> I would expect COW puts more pressure on near-full behaviour compared
> to write-in-place filesystems. If that's not true, somebody correct me.

Off the top of my head, I think it'd depend on the workload.
Write-in-place will always be faster with large IOs than with smaller
IOs, and write-in-place will always be faster than CoW with large
enough IO, because there's no overhead for choosing where the write
goes (and with large enough IO, seek overhead ~= 0).

With CoW, it probably matters more what the previous version of the
LBAs you're overwriting looked like, plus how fragmented the free space
is. Into a device with plenty of free space, small CoW writes should be
significantly faster than write-in-place.

--eric
--
Eric D. Mudama
edmudama at bounceswoosh.org
On 2/25/2011 4:15 PM, Torrey McMahon wrote:
> Some of the recent updates have pushed the 80% watermark closer to 90%
> for most workloads.

Sorry folks. I was thinking of yet another change that was in the
allocation algorithms. 80% is the number to stick with.

... now where did I put my cold medicine? :)
On Sun, Feb 27, 2011 at 7:35 PM, Brandon High <bhigh at freaks.com> wrote:
> It moves from "best fit" to "any fit" at a certain point, which is at
> ~95% (I think). Best fit looks for a large contiguous space to avoid
> fragmentation while any fit looks for any free space.

I got the terminology wrong: it's first-fit when there is space, moving
to best-fit at 96% full. See
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/metaslab.c
for details.

-B
--
Brandon High : bhigh at freaks.com
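For readers unfamiliar with the two strategies being contrasted, here is a toy free-list allocator illustrating the difference. This is an illustrative Python sketch, not the real metaslab.c code (which operates on space maps inside metaslabs); segment layout and helper names are invented for the example:

```python
# Each free segment is an (offset, length) pair.
def first_fit(free_segments, size):
    """Return (index, offset) of the FIRST segment big enough.
    Cheap to compute, but tends to chew up large segments."""
    for i, (off, length) in enumerate(free_segments):
        if length >= size:
            return i, off
    return None

def best_fit(free_segments, size):
    """Return (index, offset) of the SMALLEST segment that fits.
    Costlier to compute, but preserves large contiguous regions --
    the strategy ZFS reportedly switches to near 96% full."""
    candidates = [(length, i, off)
                  for i, (off, length) in enumerate(free_segments)
                  if length >= size]
    if not candidates:
        return None
    _, i, off = min(candidates)
    return i, off

free = [(0, 100), (200, 8), (300, 50)]
print(first_fit(free, 8))   # (0, 0): grabs the big 100-unit segment
print(best_fit(free, 8))    # (1, 200): exact-size segment, big ones survive
```

The trade-off is visible even in this toy: first-fit answers quickly but fragments the large segment, while best-fit scans the whole list to keep large extents intact, which is why it only makes sense to pay that cost once free space is scarce.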