Is there a way to force ZFS to update, or refresh, the user quota/used
value when it does not reflect what is actually on disk? Are there known
ways to make it go out of sync that we should avoid?

SunOS x4500-11.unix 5.10 Generic_141445-09 i86pc i386 i86pc
(Solaris 10 10/09 u8)

zpool1/sd01_mail   223M  15.6T   222M  /export/sd01/mail

# zfs userspace zpool1/sd01_mail
TYPE        NAME   USED  QUOTA
POSIX User  1029  54.0M   100M

# df -h .
Filesystem          size  used  avail  capacity  Mounted on
zpool1/sd01_mail    16T   222M  16T    1%        /export/sd01/mail

# ls -lhn
total 19600
-rw-------  1 1029  2100  1.7K Oct 20 12:03 1256007793.V4700025I1770M252506.vmx06.unix:2,S
-rw-------  1 1029  2100  1.7K Oct 20 12:04 1256007873.V4700025I1772M63715.vmx06.unix:2,S
-rw-------  1 1029  2100  1.6K Oct 20 12:05 1256007926.V4700025I1773M949133.vmx06.unix:2,S
-rw-------  1 1029  2100   76M Oct 20 12:23 1256009005.V4700025I1791M762643.vmx06.unix:2,S
-rw-------  1 1029  2100   54M Oct 20 12:36 1256009769.V4700034I179eM739748.vmx05.unix:2,S
-rw------T  1 1029  2100  2.0M Oct 20 14:39 file

The 54M file appears to be accounted for, but the 76M file is not. I
recently added the 2M file by chown to see whether it was a local-disk
vs NFS problem. The previous values had not updated for 2 hours.

# zfs get userused@1029 zpool1/sd01_mail
NAME              PROPERTY       VALUE  SOURCE
zpool1/sd01_mail  userused@1029  54.0M  local

Any suggestions would be most welcome,

Lund

--
Jorgen Lundman       | <lundman@lundman.net>
Unix Administrator   | +81 (0)3-5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500 (cell)
Japan                | +81 (0)3-3375-1767 (home)
The user/group used values can be out of date by a few seconds, same as
the "used" and "referenced" properties. You can run sync(1M) to wait for
these values to be updated. However, that doesn't seem to be the problem
you are encountering here.

Can you send me the output of:

zfs list zpool1/sd01_mail
zfs get all zpool1/sd01_mail
zfs userspace -t all zpool1/sd01_mail
ls -ls /export/sd01/mail
zdb -vvv zpool1/sd01_mail

--matt

Jorgen Lundman wrote:
> Is there a way to force ZFS to update, or refresh, the user quota/used
> value when it does not reflect what is actually on disk? Are there
> known ways to make it go out of sync that we should avoid?
> [...]
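A minimal sketch of that sync-then-recheck sequence, using the dataset
name from the original post (illustrative only; no output is shown
because none was captured from the machine in question):

# sync                                     # sync(1M) waits for pending transaction groups to commit
# zfs get userused@1029 zpool1/sd01_mail   # then re-read the per-user accounting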
On 20 October, 2009 - Matthew Ahrens sent me these 2,2K bytes:

> The user/group used values can be out of date by a few seconds, same
> as the "used" and "referenced" properties. You can run sync(1M) to
> wait for these values to be updated. However, that doesn't seem to be
> the problem you are encountering here.
> [...]

On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:

# zfs create rpool/quotatest
# zfs set userquota@stric=5M rpool/quotatest
# zfs userspace -t all rpool/quotatest
TYPE        NAME   USED  QUOTA
POSIX Group root     3K   none
POSIX User  root     3K   none
POSIX User  stric     0     5M
# chmod a+rwt /rpool/quotatest

stric% cd /rpool/quotatest; tar jxvf /somewhere/gimp-2.2.10.tar.bz2
... wait and it will start getting "Disc quota exceeded"; you might have
to help it along by running 'sync' in another terminal
stric% sync
stric% rm -rf gimp-2.2.10
stric% sync

... now it's all empty... but:

# zfs userspace -t all rpool/quotatest
TYPE        NAME   USED  QUOTA
POSIX Group root     3K   none
POSIX Group tdb      3K   none
POSIX User  root     3K   none
POSIX User  stric    3K     5M

This can be repeated for even more "lost blocks"; I seem to get between
3 and 5 kB each time. I tried this last night, and when I got back in
the morning, it had gone down to zero again. I haven't done any more
verifying than that.

It doesn't seem to trigger if I just write a big file with dd until it
hits "Disc quota exceeded", but unpacking a tarball does trigger it. My
tests have been as above.

Output from all of the above, plus zfs list, zfs get all, zfs userspace,
ls -l and zdb -vvv, is at:
http://www.acc.umu.se/~stric/tmp/zfs-userquota.txt

/Tomas
--
Tomas Ögren, stric@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
Tomas Ögren wrote:
> On a related note, there is a way to still have quota used even after
> all files are removed, S10u8/SPARC:

In this case there are two directories that have not actually been
removed. They have been removed from the namespace, but they are still
open, e.g. due to some process's working directory being in them.

This is confirmed by your zdb output: there are 2 directories on the
delete queue. You can force it to be flushed by unmounting and
re-mounting your filesystem.

--matt
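A minimal sketch of that remount workaround, assuming the rpool/quotatest
dataset from the reproduction above and that nothing is holding the
filesystem busy at the time:

# zfs unmount rpool/quotatest            # flushes the delete queue; fails if the fs is in use
# zfs mount rpool/quotatest
# zfs userspace -t all rpool/quotatest   # the leftover per-user usage should now be gone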
On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes:

> In this case there are two directories that have not actually been
> removed. They have been removed from the namespace, but they are
> still open, e.g. due to some process's working directory being in
> them.

Only a few processes in total were involved in this dir: cd into the fs,
untar the tarball, remove it all, cd out, run sync. The quota usage
still remains.

> This is confirmed by your zdb output: there are 2 directories on the
> delete queue. You can force it to be flushed by unmounting and
> re-mounting your filesystem.

... which isn't such a good workaround for the busy home directory
server I will shortly be using this on...

I have to say a big thank you for this userquota feature anyway, because
I tried the "one fs per user" approach first, and it just didn't scale
to our 3-4000 users, but I still want to use ZFS.

/Tomas
--
Tomas Ögren, stric@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
Tomas Ögren wrote:
> Only a few processes in total were involved in this dir: cd into the
> fs, untar the tarball, remove it all, cd out, run sync. The quota
> usage still remains.
>
>> This is confirmed by your zdb output: there are 2 directories on the
>> delete queue. You can force it to be flushed by unmounting and
>> re-mounting your filesystem.
>
> ... which isn't such a good workaround for the busy home directory
> server I will shortly be using this on...

Mark Shellenbaum provides some additional details, and a simpler
workaround:

This is a well-known problem with negative dnlc (Directory Name Lookup
Cache) entries on the directory. The problem affects both zfs and ufs,
and is covered by bugs 6400251 and 6179228, which are being worked on.

You don't necessarily have to unmount the file system to get it to flush
the dnlc and recover the space. All you need to do is cd to the root
directory of the file system and do a "zfs umount <dataset>". That will
fail, but a side effect is that the vfs layer will have purged the dnlc
for that file system. That should cause the files to be deleted.

--matt
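A minimal sketch of Mark's workaround, again assuming the rpool/quotatest
dataset from the earlier reproduction; the umount is expected to fail,
and that failure is the point:

# cd /rpool/quotatest                    # keep a working directory inside the fs so the umount cannot succeed
# zfs umount rpool/quotatest             # fails (filesystem busy), but the vfs layer purges the dnlc
# sync                                   # give the now-deletable files a transaction group to be freed in
# zfs userspace -t all rpool/quotatest   # usage should drop back down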