I was presenting to a customer at the EBC yesterday, and one of the people at the meeting said using df in ZFS really drives him crazy (no, that's all the detail I have). Any ideas/suggestions?

--
David Runyon
Disk Sales Specialist
Sun Microsystems, Inc.
4040 Palm Drive
Santa Clara, CA 95054 US
Mobile 925 323-1211
Email David.Runyon at Sun.COM
I asked this recently, but haven't done anything else about it:

http://www.opensolaris.org/jive/thread.jspa?messageID=155583

This message posted from opensolaris.org
On 10/17/07, David Runyon <David.Runyon at sun.com> wrote:
> I was presenting to a customer at the EBC yesterday, and one of the
> people at the meeting said using df in ZFS really drives him crazy (no,
> that's all the detail I have). Any ideas/suggestions?

I suspect that this is related to the notion that file systems are cheap, with cheap per-user file systems replacing the traditional notion of quotas. This means that a system with 1000 users, which previously had a small number of file systems, now has over 1000 file systems. What used to be relatively simple output from df now turns into 40+ screens[1] on a default-sized terminal window.

1. If you are in this situation, there is a good chance that the formatting of df causes line folding or wrapping that doubles the number of lines, to 80+ screens of df output.

--
Mike Gerdts
http://mgerdts.blogspot.com/
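To put a rough number on the scale Mike describes, a couple of one-liners can be run on such a box (a sketch only; the ~24-line default terminal and the presence of zfs(1M) in PATH are assumptions, not anything from the thread):

  # one df line per mounted ZFS dataset, so count the datasets
  zfs list -H -o name | wc -l

  # total lines of df output; divide by ~24 lines per default
  # terminal window to estimate screens of output
  df -h | wc -l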
David Runyon wrote:
> I was presenting to a customer at the EBC yesterday, and one of the
> people at the meeting said using df in ZFS really drives him crazy (no,
> that's all the detail I have). Any ideas/suggestions?

Filter it. This is UNIX after all...
 -- richard
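"Filter it" might look something like this in practice; a minimal sketch, assuming the per-user datasets all live under thumper-pool/home as in the output quoted below (nothing here is an official tool):

  # hide the per-user home datasets, keep everything else
  df -h | egrep -v '^thumper-pool/home/'

  # or ask df only about the mount points you actually care about
  df -h / /var/crash2 /thumper-pool /log-pool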
On Oct 18, 2007, at 11:57, Richard Elling wrote:
> David Runyon wrote:
>> I was presenting to a customer at the EBC yesterday, and one of the
>> people at the meeting said using df in ZFS really drives him crazy (no,
>> that's all the detail I have). Any ideas/suggestions?
>
> Filter it. This is UNIX after all...

err - no .. i can understand that when I put my old SA helmet on .. if you look at the avail capacity number below we've really got an overprovisioned number if you're not doing quotas - this kind of thing can drive you batty, particularly when you're used to looking at df to quickly see how much space you've got left on the system .. it's like asking how many seats are available on this plane, and they tell you the number of available seats on the airline

root at pimpmobile # df -h
Filesystem                       size   used  avail capacity  Mounted on
/dev/dsk/c5t0d0s0                454G    12G   437G     3%    /
/devices                           0K     0K     0K     0%    /devices
ctfs                               0K     0K     0K     0%    /system/contract
proc                               0K     0K     0K     0%    /proc
mnttab                             0K     0K     0K     0%    /etc/mnttab
swap                              8.4G   876K   8.4G     1%    /etc/svc/volatile
objfs                              0K     0K     0K     0%    /system/object
/usr/lib/libc/libc_hwcap2.so.1   454G    12G   437G     3%    /lib/libc.so.1
fd                                 0K     0K     0K     0%    /dev/fd
swap                              8.4G    40K   8.4G     1%    /tmp
swap                              8.4G    24K   8.4G     1%    /var/run
/dev/dsk/c5t0d0s5                 3.9G   1.8G   2.1G    46%    /var/crash2
log-pool                          457G   120M   447G     1%    /log-pool
thumper-pool/n01_oraadmin1         16T   1.4G    13T     1%    /n01/oraadmin1
thumper-pool/n01_oraarch1          16T   159M    13T     1%    /n01/oraarch1
thumper-pool/n01_oradata1          16T    98G    13T     1%    /n01/oradata1
thumper-pool/tst08a_ctl1           16T    17M    13T     1%    /s01/controlfile1
thumper-pool/tst08a_ctl2           16T    17M    13T     1%    /s01/controlfile2
thumper-pool/tst08a_ctl3           16T    17M    13T     1%    /s01/controlfile3
thumper-pool/tst32a_data           16T   135G    13T     1%    /s01/oradata1/tst32
thumper-pool                       16T   1.1T    13T     8%    /thumper-pool
thumper-pool/home                  16T    45K    13T     1%    /thumper-pool/home
thumper-pool/home/db2inst1         16T   163G    13T     2%    /thumper-pool/home/db2inst1
thumper-pool/home/kurt             16T   223K    13T     1%    /thumper-pool/home/kurt
thumper-pool/home/mahadev          16T    40K    13T     1%    /thumper-pool/home/mahadev
thumper-pool/mrd-data              16T    75G    13T     1%    /thumper-pool/mrd-data
thumper-pool/software              16T   6.3G    13T     1%    /thumper-pool/software
thumper-pool/u01                   16T   5.2G    13T     1%    /u01
thumper-pool/tst08a_data           16T   761G    13T     6%    /s01/oradata1/tst08
log-pool/swim                      50G    24K    50G     1%    /log-pool/swim
log-pool/butterfinger             457G    24K   457G     1%    /log-pool/butterfinger
[warning: paradigm shifted]

Jonathan Edwards wrote:
> On Oct 18, 2007, at 11:57, Richard Elling wrote:
>> David Runyon wrote:
>>> I was presenting to a customer at the EBC yesterday, and one of the
>>> people at the meeting said using df in ZFS really drives him crazy (no,
>>> that's all the detail I have). Any ideas/suggestions?
>>
>> Filter it. This is UNIX after all...
>
> err - no .. i can understand that when I put my old SA helmet on .. if
> you look at the avail capacity number below we've really got an
> overprovisioned number if you're not doing quotas - this kind of thing
> can drive you batty, particularly when you're used to looking at df to
> quickly see how much space you've got left on the system .. it's like
> asking how many seats are available on this plane, and they tell you the
> number of available seats on the airline

Yes. It is true that ZFS redefines the meaning of available space. But most people like compression, snapshots, clones, and the pooling concept. It may just be that you want zfs list instead; df is old-school :-)

OTOH, df does have a notion of file system specific options. It might be useful to have a df_zfs option which would effectively show the zfs list-like data.

BTW, airlines also overprovision seats, which is why you might sometimes get bumped. Hotels do this as well.
 -- richard

> [df -h output quoted above snipped]
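A df_zfs along those lines could start as little more than a thin wrapper around zfs list; a hypothetical sketch (df_zfs is not an existing utility, and the column selection is only an example):

  #!/bin/sh
  # df_zfs: show zfs list-style data for ZFS file systems,
  # passing any extra arguments (datasets, -r, ...) straight through
  exec zfs list -t filesystem -o name,used,available,refer,mountpoint "$@"

Anything genuinely df-like, such as per-mount-point lookups, would still have to be layered on top of that.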
On Oct 18, 2007, at 13:26, Richard Elling wrote:
>
> Yes. It is true that ZFS redefines the meaning of available space. But
> most people like compression, snapshots, clones, and the pooling concept.
> It may just be that you want zfs list instead; df is old-school :-)

exactly - i'm not complaining .. just understanding the confusion

I don't anticipate deprecating df in favor of "zfs list", but df_zfs or additional flags to df might be helpful .. perhaps a pool option, and some sort of easy visual to say that the avail number you're looking at is shared .. perhaps something like this (sorted output by default would be nice too):

# df -F zfs -xh
Filesystem                  size   used   resv   avail capacity  Mounted on
...
log-pool                   (457G)  120M    ---  (447G)     1%    /log-pool
log-pool/butterfinger      (457G)   24K    10G  (457G)     1%    /log-pool/butterfinger
log-pool/swim               [50G]   24K    ---   [50G]     1%    /log-pool/swim
thumper-pool                (16T)  1.1T    ---   (13T)     8%    /thumper-pool
thumper-pool/home           (16T)   46K    ---   (13T)     1%    /thumper-pool/home

essentially just some way to tell at a glance that the capacity is either (shared) or a [quota]

> OTOH, df does have a notion of file system specific options. It might be
> useful to have a df_zfs option which would effectively show the zfs list-like
> data.

yeah - i'm thinking it might be helpful to see reserved capacity here by default, or at least have a switch for it, instead of having to alias "zfs list -o name,used,reservation,available,refer,mountpoint" .. i'm always thrown at first glance by that one:

NAME                    USED  RESERV  AVAIL  REFER  MOUNTPOINT
log-pool               10.1G    none   447G   120M  /log-pool
log-pool/butterfinger  24.5K     10G   457G  24.5K  /log-pool/butterfinger
log-pool/swim          24.5K    none  50.0G  24.5K  /log-pool/swim
thumper-pool           2.63T    none  12.9T  1.11T  /thumper-pool
thumper-pool/home       163G    none  12.9T  45.7K  /thumper-pool/home

> BTW, airlines also overprovision seats, which is why you might sometimes
> get bumped. Hotels do this as well.

my point as well - meaning you're never sure if you're going to get a seat, especially if there's a rush .. sorry, looking back it's kind of a bad analogy

---
.je
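The (shared) vs [quota] visual can be mocked up today with a little nawk over zfs list; a rough sketch, assuming the only distinction drawn is whether the dataset has its own quota set (column widths and layout are illustrative, not a proposed df format):

  # -H gives tab-separated, header-less output for scripting
  zfs list -H -t filesystem -o name,quota,used,available,mountpoint | \
  nawk -F'\t' '{
      # no quota: the avail number is shared pool space  -> (avail)
      # quota set: the avail number is capped by a quota -> [avail]
      if ($2 == "none") avail = "(" $4 ")"; else avail = "[" $4 "]"
      printf("%-30s %8s %12s  %s\n", $1, $3, avail, $5)
  }'

Reservations could be folded in the same way by adding the reservation property to the -o list and another branch to the nawk script.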