Hello,

Is there a way to limit the size of a filesystem not including snapshots? Or, even better, the size of the data on the filesystem regardless of compression? If not, is it planned?

It is hard to explain to a user that it is normal that, after deleting his files, he did not get any space back. It is even harder to ask him to use no more than half of his given quota.
zfs-discuss-bounces at opensolaris.org wrote on 08/07/2007 10:53:28 AM:

> Hello
>
> Is there a way to limit the size of a filesystem not including snapshots?
> Or, even better, the size of the data on the filesystem regardless of
> compression? If not, is it planned?
> It is hard to explain to a user that it is normal that, after deleting
> his files, he did not get any space back. It is even harder to ask him
> to use no more than half of his given quota.

Even harder to explain is that the quota is not just for her files -- it covers all user files on the filesystem, the snapshots, and so on. Snapshots and other administrative-level disk usage should at least have a flag controlling whether they count against filesystem quotas.

It would be very nice to have user quotas again, too -- sometimes it is just not feasible to segment filesystems into quota blocks when you are trying to apply quotas per user and the users overlap on the filesystem structure. Consider /data/departments/HR, a locked-down filesystem or directory for 20 HR employees. You want to limit disk usage to 50 GB per employee just so they do not dump their iTunes library into the folder because they can. With ufs/vxfs you can make /data/department/HR a filesystem with user quotas -- under ZFS you can only quota the filesystem, all or nothing.

-Wade
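For illustration, a minimal sketch of what ZFS offers today for the HR example above (the pool and dataset names are hypothetical, not Wade's actual layout): the quota applies to the whole filesystem, and the only per-user control is a child filesystem per user, which changes the shared directory structure.

  # All-or-nothing: one quota covering the entire shared HR filesystem
  zfs create pool/data/departments/HR
  zfs set quota=1t pool/data/departments/HR

  # The closest thing to a per-user limit: a child filesystem per user,
  # each with its own 50 GB quota -- but now every user lives in a separate tree.
  zfs create pool/data/departments/HR/jdoe
  zfs set quota=50g pool/data/departments/HR/jdoe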
On Tue, Aug 07, 2007 at 08:53:28AM -0700, Dmitry wrote:

> Hello
>
> Is there a way to limit the size of a filesystem not including snapshots?
> Or, even better, the size of the data on the filesystem regardless of
> compression? If not, is it planned?
> It is hard to explain to a user that it is normal that, after deleting
> his files, he did not get any space back. It is even harder to ask him
> to use no more than half of his given quota.

You want:

6431277 want filesystem-only quotas

This is planned, but there are currently some higher-priority items.

- Eric

--
Eric Schrock, Solaris Kernel Development
http://blogs.sun.com/eschrock
Just wanted to voice another request for this feature.

On a previous Solaris 10/ZFS system I was forced to rsync whole filesystems and snapshot the backup copy to keep the snapshots from negatively impacting users. This obviously has the effect of reducing the available space on the system by more than half. It also robs you of a lot of I/O bandwidth while all that data is rsyncing, and it means that users can't see their snapshots; only a sysadmin with access to the backup copy can.

We've got a new system that isn't doing the rsync, and users very quickly discovered over-quota problems when their directories appeared empty and deleting files didn't help. They required sysadmin intervention to increase their filesystem quotas to accommodate the snapshots as well as their real data. Trying to anticipate the space required for the snapshots and giving them that as a quota is more or less hopeless, and it gives them that much more rope with which to hang themselves with massive snapshots.

I hate to start rsyncing again, but I may be forced to; policing the snapshot space consumption is getting painful, but the online snapshot feature is too valuable to discard altogether.

Or if there are other creative solutions, I'm all ears...
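For reference, a rough sketch of the rsync-plus-backup-snapshots arrangement described above (the paths, dataset names, and schedule are hypothetical, not the actual setup): mirror the live filesystem to a backup dataset and take snapshots only on the backup, so the user-visible filesystem never carries snapshot overhead.

  #!/bin/sh
  # Mirror the live filesystem to a backup dataset, then snapshot only the backup.
  SRC=/export/home                 # live filesystem -- no snapshots kept here
  DST=/backup/home                 # mountpoint of the dataset backup/home
  rsync -a --delete "$SRC/" "$DST/"
  zfs snapshot backup/home@`date +%Y%m%d-%H%M`

Run from cron at whatever interval the snapshots should cover; the cost is the doubled space and the rsync I/O, exactly as described above.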
Brad Plecs wrote:

> I hate to start rsyncing again, but I may be forced to; policing the snapshot
> space consumption is getting painful, but the online snapshot feature is too
> valuable to discard altogether.
>
> Or if there are other creative solutions, I'm all ears...

OK, you asked for "creative" workarounds... here's one (though it requires that the filesystem be briefly unmounted, which may be deal-killing):

zfs create pool/realfs
zfs set quota=1g pool/realfs

again:
    zfs umount pool/realfs
    zfs rename pool/realfs pool/oldfs
    zfs snapshot pool/oldfs@$now
    zfs clone pool/oldfs@$now pool/realfs
    zfs set quota=1g pool/realfs      (6364688 would be useful here)
    zfs set quota=none pool/oldfs
    zfs promote pool/oldfs
    zfs destroy pool/backupfs
    zfs rename pool/oldfs pool/backupfs
    backup pool/backupfs@$now
    sleep $backupinterval
    goto again

FYI, we are working on "fs-only" quotas.

--matt
> OK, you asked for "creative" workarounds... here's one (though it requires
> that the filesystem be briefly unmounted, which may be deal-killing):

That is, indeed, creative. :) And yes, the unmount makes it impractical in my environment.

I ended up going back to rsync, because we had more and more complaints as the snapshots accumulated, but I am now just rsyncing to another system, which in turn runs snapshots on the backup copy. It's still time- and I/O-consuming, and the users can't recover their own files, but at least I'm not eating up 200% of the otherwise necessary space on the expensive new hardware RAID and fielding daily over-quota (when not really over-quota) complaints.

Thanks for the suggestion. Looking forward to the new feature...

BP

> zfs create pool/realfs
> zfs set quota=1g pool/realfs
>
> again:
>     zfs umount pool/realfs
>     zfs rename pool/realfs pool/oldfs
>     zfs snapshot pool/oldfs@$now
>     zfs clone pool/oldfs@$now pool/realfs
>     zfs set quota=1g pool/realfs      (6364688 would be useful here)
>     zfs set quota=none pool/oldfs
>     zfs promote pool/oldfs
>     zfs destroy pool/backupfs
>     zfs rename pool/oldfs pool/backupfs
>     backup pool/backupfs@$now
>     sleep $backupinterval
>     goto again
>
> FYI, we are working on "fs-only" quotas.
>
> --matt

--
bplecs at cs.umd.edu
Hello Brad,

Monday, August 27, 2007, 3:47:47 PM, you wrote:

>> OK, you asked for "creative" workarounds... here's one (though it requires
>> that the filesystem be briefly unmounted, which may be deal-killing):

BP> That is, indeed, creative. :) And yes, the unmount makes it
BP> impractical in my environment.

BP> I ended up going back to rsync, because we had more and more
BP> complaints as the snapshots accumulated, but I am now just rsyncing to
BP> another system, which in turn runs snapshots on the backup copy. It's
BP> still time- and I/O-consuming, and the users can't recover their own
BP> files, but at least I'm not eating up 200% of the otherwise necessary
BP> space on the expensive new hardware RAID and fielding daily
BP> over-quota (when not really over-quota) complaints.

BP> Thanks for the suggestion. Looking forward to the new feature...

Instead of rsync you could try sending incrementals using zfs send. If you have a lot of files it should be much quicker (it issues far fewer I/Os).

--
Best regards,
Robert Milkowski                    mailto:rmilkowski at task.gda.pl
                                    http://milek.blogspot.com
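A minimal sketch of the incremental approach Robert suggests (the dataset names, snapshot labels, and the remote host "backuphost" are hypothetical): keep a common snapshot on both sides and send only the delta between it and each new snapshot.

  # One-time full copy of the dataset to the backup host
  zfs snapshot pool/home@base
  zfs send pool/home@base | ssh backuphost zfs receive backup/home

  # Thereafter: snapshot again and send only the changes since the common snapshot
  zfs snapshot pool/home@monday
  zfs send -i pool/home@base pool/home@monday | ssh backuphost zfs receive backup/home

If the receiving dataset is mounted and may have drifted, zfs receive -F rolls it back to its most recent snapshot before applying the incremental stream.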
Brad Plecs wrote:

> I ended up going back to rsync, because we had more and more
> complaints as the snapshots accumulated, but I am now just rsyncing to
> another system, which in turn runs snapshots on the backup copy. It's
> still time- and I/O-consuming, and the users can't recover their own
> files, but at least I'm not eating up 200% of the otherwise necessary
> space on the expensive new hardware RAID and fielding daily
> over-quota (when not really over-quota) complaints.

You could keep a couple of snapshots around and use them to "zfs send | zfs receive". "zfs send" is far more efficient than rsync.

--
Jesus Cea Avion
jcea at argo.es                    http://www.argo.es/~jcea/
jabber / xmpp: jcea at jabber.org