One of the things we monitor is the usage of filesystems, alarming when they
get too close to filling up (since some applications aren't happy when they
encounter full filesystems).  The way it's done today is just a script that
periodically parses the output of df.  I think we need to take a different
approach with ZFS.  Based on my understanding of ZFS, if you do something
like this:

    zpool create foo ...
    zfs create foo/bar
    zfs create foo/bar/1
    zfs create foo/bar/2
    ...
    zfs create foo/bar/50
    zfs set quota=1G foo/bar

the quota will limit the combined usage of foo/bar/1 through foo/bar/50 to
1G total.  If that quota is then hit, df will show all 50 filesystems as
100% full.  Our current setup will then generate 50 alarms (one per
filesystem), which then generates 50 tickets, which causes 50 pageouts,
etc. -- not a desirable situation.

With ZFS, the actual limit (from my understanding) on a filesystem is
controlled by the free space in the pool, as well as by any quotas set
either on the filesystem or on an ancestor.  It would seem that instead of
monitoring each ZFS filesystem, we would want to monitor the points where
those limits are imposed -- i.e. look at the free space in the pool, and at
where the quotas are defined.  Is there any good way to determine those
points other than doing a zfs get on every ZFS filesystem to find where the
quotas are set?
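For context, the check we run today is essentially the sketch below.  This
is only an illustration: the 90% threshold and the echo are stand-ins for
our real thresholds and ticketing hook, not the actual script.

    #!/bin/sh
    # Rough sketch of the current df-based monitor: one alarm per
    # filesystem, which is exactly what misfires when 50 ZFS
    # filesystems share one ancestor quota.  On Solaris, df -k
    # prints capacity in field 5 and the mount point in field 6.
    THRESHOLD=90
    df -k | awk 'NR > 1 { sub(/%$/, "", $5); print $6, $5 }' |
    while read mount pct; do
            if [ "$pct" -ge "$THRESHOLD" ]; then
                    echo "ALARM: $mount is ${pct}% full"
            fi
    done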
On Thu, Dec 01, 2005 at 09:45:28AM -0800, Jason King wrote:
> With ZFS, the actual limit (from my understanding) on the filesystem
> is controlled by free space in the pool, as well as any quotas set
> either on the filesystem, or an ancestor. It would seem that instead
> of monitoring each ZFS filesystem, we would want to monitor where
> those limits are imposed -- i.e. look at the free space in the pool,
> and where the quotas are defined. Is there any good way to determine
> those points other than doing a zfs get on every ZFS filesystem to
> find where the quotas are set?

% zfs list -o name,quota,available
NAME        QUOTA  AVAIL
pool         none  7.66G
pool/aux0    none  7.66G
pool/aux1   5.00G   271M
pool/opt     none  7.66G

You can use '-P' to get a parsable form which is tab-delimited.

Cheers,
- jonathan

--
Jonathan Adams, Solaris Kernel Development
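One way to act on that listing (a sketch, assuming the three-column form
shown above) is to filter out just the datasets that actually carry a
quota:

    % zfs list -o name,quota,available | awk 'NR > 1 && $2 != "none" { print $1 }'
    pool/aux1

With the sample data above, only pool/aux1 would be reported as a quota
point.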
On Thu, Dec 01, 2005 at 09:45:28AM -0800, Jason King wrote:
> With ZFS, the actual limit (from my understanding) on the filesystem
> is controlled by free space in the pool, as well as any quotas set
> either on the filesystem, or an ancestor. It would seem that instead
> of monitoring each ZFS filesystem, we would want to monitor where
> those limits are imposed -- i.e. look at the free space in the pool,
> and where the quotas are defined. Is there any good way to determine
> those points other than doing a zfs get on every ZFS filesystem to
> find where the quotas are set?

You can use the following invocation:

    $ zfs get -rH -o name -s local quota <pool>

This will give you a parsable list of all filesystems within the pool
that have a locally set quota.

    -r          Recursive
    -H          Scripted, don't print out 'NAME' header
    -o name     Print only the name of matching datasets
    -s local    Only print values that are set locally
    quota       Only print 'quota' properties

If you want the parsable numeric quota values as well (separated by
tabs), try:

    $ zfs get -rHp -o name,value -s local quota <pool>

While potentially confusing, this is a good example of how we made
'zfs get' powerful enough to answer all these questions.

- Eric

--
Eric Schrock, Solaris Kernel Development
http://blogs.sun.com/eschrock
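Putting the pieces together, a monitor that alarms once per quota point
(rather than once per filesystem) could look roughly like the sketch
below.  It is only an illustration: the pool name "foo", the 90%
threshold, and the echo in place of real alerting are all assumptions,
not anything from this thread.

    #!/bin/ksh
    # Walk the quota points found by 'zfs get' and alarm once per
    # quota.  "foo" and the 90% threshold are illustrative values.
    POOL=foo
    zfs get -rHp -o name,value -s local quota "$POOL" |
    while read ds quota; do
            # 'used' on the quota'd dataset includes its descendants,
            # so it reflects total consumption against the quota.
            used=`zfs get -Hp -o value used "$ds"`
            # do the arithmetic in awk, since byte counts can
            # overflow shell integer math
            pct=`echo "$used $quota" | awk '{ printf "%d", $1 * 100 / $2 }'`
            if [ "$pct" -ge 90 ]; then
                    echo "ALARM: quota on $ds is ${pct}% consumed"
            fi
    done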
On Thu, Dec 01, 2005 at 12:15:02PM -0800, Jonathan Adams wrote:
> % zfs list -o name,quota,available
> NAME        QUOTA  AVAIL
> pool         none  7.66G
> pool/aux0    none  7.66G
> pool/aux1   5.00G   271M
> pool/opt     none  7.66G
>
> You can use '-P' to get a parsable form which is tab-delimited.

Actually, it's '-H' (copied from svcs).  Note that '-p', which prints
the numbers as absolute values, isn't supported in 'zfs list'.  Between
this and the 'zfs get' version (the two have slightly different
semantics) you should be able to find what you're looking for.  Let us
know if you can't get the info you need.

- Eric

--
Eric Schrock, Solaris Kernel Development
http://blogs.sun.com/eschrock
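For completeness, here is what the corrected, parsable form would look
like against the pool from Jonathan's example (the datasets and sizes
are just his sample data; the values stay human-readable, since '-p'
isn't supported for 'zfs list'):

    % zfs list -H -o name,quota,available
    pool    none    7.66G
    pool/aux0       none    7.66G
    pool/aux1       5.00G   271M
    pool/opt        none    7.66G

Each field is separated by a single tab, so cut(1) or awk can split it
without worrying about column widths; for raw byte counts, use the
'zfs get -rHp' form shown earlier.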