I wonder if there is a possibility to use user and group quota in zfs (like in vxfs). Actually I ran into a little problem. I want to create user homes on a zfs pool where each user is limited by a quota.

# zpool create data mirror c1t2d0 c1t3d0
# zfs create data/home

As I could not find a hint for how to use "user quota" and "group quota", I created little datasets instead:

# zfs create data/home/ann
# zfs set quota=2G data/home/ann
# zfs create data/home/bob
# zfs set quota=2G data/home/bob

Now I want to back up the whole data as I used to (first make a snapshot, then archive the snapshot with the backup software):

# zfs snapshot data@backup

Now I have the following problem:

# cd /data/.zfs/snapshot/backup/
# ls
home
# cd home
# ls
#

It makes some sense to me that I cannot see the contents of home, since the user homes are different datasets. Anyway, is there a possibility to make a recursive snapshot? Is there a way to implement "user" and "group" quotas? Either way (or best both) would solve my issue.

thx

This message posted from opensolaris.org
Is there a way to implement user and group quotas? Did I just not recognize the commands for it?

This message posted from opensolaris.org
> Is there a way to implement user and group quota?

Yes -- but not in the traditional sense.

With UFS you generally have just a few filesystems. The user id is the only administrative entity to which you can apply quotas.

With ZFS you generally have lots of filesystems -- one per logical entity that you're administering. That logical entity can be a user, a project, a business unit, a zone -- whatever you want.

For example, at Sun we have servers set up so that each user's home directory is a separate filesystem. This turns out to be very convenient administratively. Want to know who's using all the space? Just run df(1M) -- no super-expensive du(1) required. Similarly, you can have per-user snapshots, backups, and various other properties -- including, of course, quotas and reservations.

ZFS filesystems and their properties are hierarchical, so the quota for a given filesystem also applies to the sum of all its children. This means you can have arbitrarily nested administrative domains -- not just user and group -- to represent whatever policy you want. Mark Maybee describes the possibilities in considerable detail here:

http://blogs.sun.com/roller/page/markm?entry=filesystem_quotas_and_reservations_on

It's a different way of thinking about your data.

Jeff
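A minimal sketch of the nested-quota idea Jeff describes, with hypothetical pool and dataset names:

# zfs set quota=100G data/home           # caps the sum of everything below it
# zfs set quota=2G data/home/ann         # a tighter per-user limit within that
# zfs set quota=10G data/home/project-x  # a per-project limit alongside it
# zfs get -r quota data/home             # review the whole hierarchy at once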
Gerald Griessner
2005-Dec-01 06:26 UTC
[zfs-discuss] Re: Re: zfs snapshot for backup, Quota
The new way of applying quota is indeed pretty cool, ... but if I make a "dataset" for home and a dataset for each user (e.g. ann, bob), I run into a problem when backing up.

Currently I make a snapshot in vxfs of /home, mount it to /snap and back up the snapshot.

How can I make a snapshot of home in zfs containing the data including the stuff within the user homes (home/ann, home/bob) - like a recursive snapshot?

The only way so far I could think of was:

- copy the directory structure (home/ann, home/bob) to /snap
- initiate a snapshot of every dataset (home/ann, home/bob)
- mount each snapshot to the counterpart under /snap
- run the backup
- remove the mounts
- release the snapshots
- clear /snap

If there is something like a recursive snapshot or user and group quotas in the classical sense, the effort needed could be minimized, ...

Cheers
Gerald

This message posted from opensolaris.org
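A rough sketch of that manual procedure as a script. The dataset and path names are hypothetical, and loopback mounts stand in for the vxfs-style /snap mounts:

#!/bin/sh
# Snapshot every filesystem under data/home, then loopback-mount each
# snapshot under /snap so a single backup run can see all of them.
mkdir -p /snap
for fs in `zfs list -H -o name -r -t filesystem data/home`; do
        zfs snapshot $fs@backup
        mnt=`zfs list -H -o mountpoint $fs`
        mkdir -p /snap/`basename $fs`
        mount -F lofs -o ro $mnt/.zfs/snapshot/backup /snap/`basename $fs`
done

Tearing it down afterwards is the reverse: umount each /snap entry, then zfs destroy each @backup snapshot.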
A short script and a bit of abuse of the automounter can do this. I have put it on my blog here:

http://blogs.sun.com/roller/page/chrisg?entry=zfs_snapshots_meet_automounter_and

--chris

This message posted from opensolaris.org
I have a similar script to the one in your blog that I have been playing around with. It will use the "zfs clone" command to remount the snapshots in a different directory for backup.

Sample fs layout:

# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
test            253K   476M    16K  /test
test/backup      16K   476M    16K  /backup
test/home      50.0K   476M  18.0K  /export/home
test/home/ann    16K   476M    16K  /export/home/ann
test/home/bob    16K   476M    16K  /export/home/bob

Snapshot script:

#!/bin/sh -x
# Create a container for this backup run, then snapshot and clone
# every home filesystem under it.
zfs create test/backup/$1
for fs in `zfs list -H -o name -t filesystem | grep home/ | grep -v @ | cut -d/ -f3`
do
        zfs snapshot test/home/$fs@$1
        zfs clone test/home/$fs@$1 test/backup/$1/$fs
done
exit 0

Snapshot remove script:

#!/bin/sh -x
# Destroy every snapshot matching the label (and its clones), then
# the backup container itself.
for fs in `zfs list -H -o name -t snapshot | grep $1`
do
        zfs destroy -R $fs
done
zfs destroy test/backup/$1
exit 0

This message posted from opensolaris.org
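Hypothetical invocation, assuming the two scripts above are saved as mksnap.sh and rmsnap.sh and take a backup label as their only argument:

# ./mksnap.sh nightly      # clones appear at /backup/nightly/ann, /backup/nightly/bob
  (run the backup against /backup/nightly)
# ./rmsnap.sh nightly      # destroys the clones and snapshots again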
Sorry to revive such an old thread... but I'm struggling here.

I really want to use zfs. Fssnap, SVM, etc. all have drawbacks. But I work for a University, where everyone has a quota. I'd literally have to create > 10K partitions. Is that really your intention?

Of course, backups become a huge pain now. Even with the scripted idea below, that's cumbersome for both backups and (especially) restores.

Why can't we just have user quotas in zfs? :)

Respectfully,
-Charlie

This message posted from opensolaris.org
On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:

> Sorry to revive such an old thread... but I'm struggling here.
>
> I really want to use zfs. Fssnap, SVM, etc. all have drawbacks. But I
> work for a University, where everyone has a quota. I'd literally have
> to create > 10K partitions. Is that really your intention?

Yes. You'd group them all under a single filesystem in the hierarchy, allowing you to manage NFS share options, compression, and more from a single control point.

> Of course, backups become a huge pain now. Even with the scripted idea
> below, that's cumbersome for both backups and (especially) restores.

Using traditional tools or ZFS send/receive? We are working on RFEs for recursive snapshots, send, and recv, as well as preserving DSL properties as part of a 'send', which should make backups of large filesystem hierarchies much simpler.

> Why can't we just have user quotas in zfs? :)

The fact that per-user quotas exist is really a historical artifact. With traditional filesystems, it is (effectively) impossible to have a filesystem per user. The filesystem is a logical administrative control point, allowing you to view usage, control properties, perform backups, take snapshots, etc. For home directory servers, you really want to do these operations per-user, so logically you'd want to equate the two (filesystem = user). Per-user quotas (the most common use of quotas, but not the only one) were introduced because multiple users had to share the same filesystem.

ZFS quotas are intentionally not associated with a particular user because a) it's the logical extension of "filesystems as control point", b) it's vastly simpler to implement and, most importantly, c) it separates implementation from administrative policy. ZFS quotas can be set on filesystems which may represent projects, groups, or any other abstraction, as well as on entire portions of the hierarchy. This allows them to be combined in ways that traditional per-user quotas cannot.

Hope that helps,

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
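A short sketch of the "filesystem as control point" model; the property values and dataset names are hypothetical. Properties set on the parent are inherited by every child, so one command adjusts all of the homes at once:

# zfs create data/home
# zfs set sharenfs=rw data/home       # inherited by every user filesystem
# zfs set compression=on data/home    # likewise
# zfs create data/home/ann            # one filesystem per user
# zfs set quota=2G data/home/ann      # the quota lives on the filesystem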
> Why can't we just have user quotas in zfs? :)

+1 to that. I support a couple environments with group/user quotas that cannot move to ZFS since they serve brain-dead apps that read/write from a single directory.

I also agree that using even a few hundred mountpoints is more tedious than using quotas, but I can get used to that... I just won't use df as often. :)

This message posted from opensolaris.org
Eric Schrock wrote:

> On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
>> to create > 10K partitions. Is that really your intention?
>
> Yes. You'd group them all under a single filesystem in the hierarchy,
> allowing you to manage NFS share options, compression, and more from a
> single control point.

This isn't so bad. I'm going to assume that mounting 10K partitions at boot doesn't take forever. :)

>> Of course, backups become a huge pain now. ... that's cumbersome
>> for both backups and (especially) restores.
>
> Using traditional tools or ZFS send/receive?

Traditional (amanda). I'm not seeing a way to dump zfs file systems to tape without resorting to 'zfs send' being piped through gtar or something. Even then, the only thing I could restore was an entire file system. (We frequently restore single files for users...)

Perhaps, since zfs isn't limited to one snapshot per FS like fssnap is, I should be redesigning everything. It sounds like I should look at using many snapshots, and dumping to tape (each file system, somehow) less frequently.

Waiting for S10_U2 now :)

> Hope that helps,
>
> - Eric

It does. Thanks!

-Charlie

This message posted from opensolaris.org
On Thu, 2006-05-18 at 12:12 -0700, Eric Schrock wrote:

> On Thu, May 18, 2006 at 11:42:58AM -0700, Charlie wrote:
>> Sorry to revive such an old thread... but I'm struggling here.
>>
>> I really want to use zfs. Fssnap, SVM, etc. all have drawbacks. But I
>> work for a University, where everyone has a quota. I'd literally have
>> to create > 10K partitions. Is that really your intention?
>
> Yes. You'd group them all under a single filesystem in the hierarchy,
> allowing you to manage NFS share options, compression, and more from a
> single control point.

I'd agree except for backups. If the pools are going to grow beyond a reasonable-to-backup and reasonable-to-restore threshold (measured by the backup window), it would be practical to break it into smaller pools.

After all, you'll probably have to restore a pool eventually. If that will take a week, your users won't be very happy with your solution.

>> Of course, backups become a huge pain now. Even with the scripted idea
>> below, that's cumbersome for both backups and (especially) restores.
>
> Using traditional tools or ZFS send/receive? We are working on RFEs for
> recursive snapshots, send, and recv, as well as preserving DSL
> properties as part of a 'send', which should make backups of large
> filesystem hierarchies much simpler.

Using EBS or NetBackup, can I get a single file back from tape only through the backup system? That's a big factor for production environments. Also, when users request a restore from tape from offsite backups, they'll usually specify a date range for when the file was 'good'. To accomplish that, you need to use the backup solution to find the requisite file. These 'fishing expeditions' (as I call them) can take a lot of time if direct access isn't available via the backup tool.

I believe you're referring in the above to using zfs send/recv for backup to tape. Until the vendors work with zfs send/recv, it's not a viable option for filesystem backups in a production environment.

Related to that, does anybody have a timeframe for direct support for ZFS send/recv (or something similar) in NBU or EBS?

[ quota explanation deleted for brevity ]
Nicolas Williams
2006-May-18 20:36 UTC
[zfs-discuss] Re: Re: zfs snapshot for backup, Quota
On Thu, May 18, 2006 at 02:23:55PM -0600, Gregory Shaw wrote:

> I'd agree except for backups. If the pools are going to grow beyond a
> reasonable-to-backup and reasonable-to-restore threshold (measured by
> the backup window), it would be practical to break it into smaller
> pools.

Speaking of backups, and particularly when we get to recursive ones, I'd like control over the filesystem and snapshot names restored, as well as control over which snapshots should appear and overrides for properties (considering the RFE to have property setting on zfs create). Currently one can specify the fs name to restore, yes, I know.

Also, the recursive backup output will need a ToC and a tool to list it.

Nico
--
On Thu, May 18, 2006 at 12:46:28PM -0700, Charlie wrote:

> Traditional (amanda). I'm not seeing a way to dump zfs file systems to
> tape without resorting to 'zfs send' being piped through gtar or
> something. Even then, the only thing I could restore was an entire file
> system. (We frequently restore single files for users...)
>
> Perhaps, since zfs isn't limited to one snapshot per FS like fssnap is,
> I should be redesigning everything. It sounds like I should look at
> using many snapshots, and dumping to tape (each file system, somehow)
> less frequently.

That's right. With ZFS, there should never be a need to go to tape to recover an accidentally deleted file, because it's easy[*] to keep lots of snapshots around.

[*] Well, modulo 6373978 "want to take lots of snapshots quickly ('zfs snapshot -r')". I'm working on that...

--matt
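Recovering one file from a snapshot needs no special tool; a sketch, assuming a home filesystem mounted at /export/home/ann with a snapshot named "backup" (the file name is hypothetical):

# ls /export/home/ann/.zfs/snapshot/backup/
thesis.tex
# cp /export/home/ann/.zfs/snapshot/backup/thesis.tex /export/home/ann/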
On 5/18/06, Gregory Shaw <Greg.Shaw at sun.com> wrote:

[ earlier quoted thread deleted for brevity ]

> Using EBS or NetBackup, can I get a single file back from tape only
> through the backup system? That's a big factor for production
> environments. Also, when users request a restore from tape from offsite
> backups, they'll usually specify a date range for when the file was
> 'good'. To accomplish that, you need to use the backup solution to find
> the requisite file. These 'fishing expeditions' (as I call them) can
> take a lot of time if direct access isn't available via the backup tool.

ZFS basically eliminates the need for single-file restores: because it has snapshots, the user can have almost instant access to old copies of files, and it is a lot quicker than even the fastest tape library. Just make daily snapshots, and the need to restore a single file from tape is almost completely eliminated. You can still use netbackup for disasters, but to get access to a single old file a snapshot is much easier.

You can also make it possible for users to initiate their own snapshots when they feel the need arises.

James Dickens
uadmin.blogspot.com

> I believe you're referring in the above to using zfs send/recv for
> backup to tape. Until the vendors work with zfs send/recv, it's not a
> viable option for filesystem backups in a production environment.
>
> Related to that, does anybody have a timeframe for direct support for
> ZFS send/recv (or something similar) in NBU or EBS?
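A sketch of the daily-snapshot idea; the crontab entry, script path, and dataset names are all hypothetical:

# root crontab entry: snapshot every home filesystem at 01:00
0 1 * * * /usr/local/bin/daily-snap.sh

#!/bin/sh
# /usr/local/bin/daily-snap.sh -- stamp each snapshot with today's date
for fs in `zfs list -H -o name -r -t filesystem data/home`; do
        zfs snapshot $fs@daily-`date +%Y%m%d`
done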
On Thu, May 18, 2006 at 12:46:28PM -0700, Charlie wrote:

> Eric Schrock wrote:
>> Using traditional tools or ZFS send/receive?
>
> Traditional (amanda). I'm not seeing a way to dump zfs file systems to
> tape without resorting to 'zfs send' being piped through gtar or
> something. Even then, the only thing I could restore was an entire file
> system. (We frequently restore single files for users...)

Remember, ZFS is a fully POSIX-compliant filesystem. Any backup program that uses system calls to do its work will still function properly. Why would you believe that your backup program doesn't work with ZFS? Have you actually tried it? If it doesn't work, that's a big bug for us.

> Perhaps, since zfs isn't limited to one snapshot per FS like fssnap is,
> I should be redesigning everything. It sounds like I should look at
> using many snapshots, and dumping to tape (each file system, somehow)
> less frequently.

That's definitely an option. You can also tell your backup program to not stop at filesystem boundaries so you can do entire trees of your namespace at once.

--Bill
Bill Moore wrote:

> On Thu, May 18, 2006 at 12:46:28PM -0700, Charlie wrote:
>> Eric Schrock wrote:
>>> Using traditional tools or ZFS send/receive?
>>
>> Traditional (amanda). I'm not seeing a way to dump zfs file systems to
>> tape without resorting to 'zfs send' being piped through gtar or
>> something. Even then, the only thing I could restore was an entire file
>> system. (We frequently restore single files for users...)
>
> Remember, ZFS is a fully POSIX-compliant filesystem. Any backup program
> that uses system calls to do its work will still function properly. Why
> would you believe that your backup program doesn't work with ZFS? Have
> you actually tried it? If it doesn't work, that's a big bug for us.

Of course, using system calls isn't an issue. Most backup systems function at a higher level than read(), however. :) I was thinking about amanda specifically, and I'd need zfsdump to do that.

The result is thus: if I want incrementals, I must tell amanda to use tar. Using 'dump' is preferred for many reasons. And 'zfs send' is neat, but only mildly useful.

-Charlie

This message posted from opensolaris.org
On Thu, 2006-05-18 at 16:43 -0500, James Dickens wrote:

[ earlier quoted thread deleted for brevity ]

> ZFS basically eliminates the need for single-file restores: because it
> has snapshots, the user can have almost instant access to old copies of
> files, and it is a lot quicker than even the fastest tape library. Just
> make daily snapshots, and the need to restore a single file from tape
> is almost completely eliminated. You can still use netbackup for
> disasters, but to get access to a single old file a snapshot is much
> easier.
>
> You can also make it possible for users to initiate their own
> snapshots when they feel the need arises.
>
> James Dickens
> uadmin.blogspot.com

The above would be fine for testing. However, on an active filesystem that is more than 50% full, you'll find that large amounts of space will be used by the snapshots.

We currently use a pair of the Bluearc Titan fileserver appliances. They have very similar snapshot functionality. Currently, we can't maintain more than about 3 days of snapshots every 4 hours due to space constraints.

For filesystems that don't move much, 1 snapshot per day for a year may be practical. I doubt it, as snapshots have to be managed, and maintaining 365 snapshots per filesystem (not pool) will be very difficult.

[ stuff deleted ]
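Keeping a rolling window of snapshots can at least be automated; a rough pruning sketch, assuming the hypothetical daily-YYYYMMDD naming scheme from the sketch earlier in the thread:

#!/bin/sh
# Keep only the newest $KEEP daily snapshots of each home filesystem.
KEEP=14
for fs in `zfs list -H -o name -r -t filesystem data/home`; do
        snaps=`zfs list -H -o name -t snapshot | grep "^$fs@daily-" | sort`
        count=`echo "$snaps" | grep -c .`
        excess=`expr $count - $KEEP`
        if [ $excess -gt 0 ]; then
                echo "$snaps" | head -$excess | while read s; do
                        zfs destroy "$s"
                done
        fi
done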
On the topic of ZFS snapshots: does the snapshot just capture the changed _blocks_, or does it effectively copy the entire file if any block has changed? That is, assuming that the snapshot (destination) stays inside the same pool space.

-Erik
Nicolas Williams
2006-May-18 22:50 UTC
[zfs-discuss] Re: Re: zfs snapshot for backup, Quota
On Thu, May 18, 2006 at 03:41:13PM -0700, Erik Trimble wrote:

> On the topic of ZFS snapshots:
>
> does the snapshot just capture the changed _blocks_, or does it
> effectively copy the entire file if any block has changed?

Incremental sends capture changed blocks. Snapshots capture all of the FS state as of the time the snapshot is taken, though it does so in constant time. Subsequent changes are kept as changed blocks, as deltas to the snapshot in the filesystem and clones.

> That is, assuming that the snapshot (destination) stays inside the same
> pool space.

Of course it does. Er, what do you mean by 'destination'?
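A sketch of how those block-level deltas surface in send/receive; the snapshot names and output paths are hypothetical. The -i stream contains only the blocks that changed between the two snapshots:

# zfs snapshot data/home@monday
  (a day of changes...)
# zfs snapshot data/home@tuesday
# zfs send data/home@monday > /backup/home-monday.full
# zfs send -i data/home@monday data/home@tuesday > /backup/home-tuesday.incr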
Nathan Kroenert
2006-May-19 00:18 UTC
[zfs-discuss] Re: Re: zfs snapshot for backup, Quota
Just piqued my interest on this one -

How would we enforce quotas of sorts in large filesystems that are shared? I can see times when I might want lots of users to use the same directory (and thus, same filesystem) but still want to limit the amount of space each user can consume.

Thoughts?

Nathan. :)

On Fri, 2006-05-19 at 05:12, Eric Schrock wrote:

[ quoted message deleted for brevity ]

--
//////////////////////////////////////////////////////////////////
// Nathan Kroenert                nathan.kroenert at sun.com     //
// PTS Engineer                   Phone: +61 2 9844-5235        //
// Sun Services                   Direct Ext: x57235            //
// Level 2, 828 Pacific Hwy       Fax:   +61 2 9844-5311        //
// Gordon 2072 New South Wales    Australia                     //
//////////////////////////////////////////////////////////////////
On Fri, 2006-05-19 at 10:18 +1000, Nathan Kroenert wrote:

> Just piqued my interest on this one -
>
> How would we enforce quotas of sorts in large filesystems that are
> shared? I can see times when I might want lots of users to use the same
> directory (and thus, same filesystem) but still want to limit the amount
> of space each user can consume.
>
> Thoughts?

rats.

> Nathan. :)

OK :-) I've been wondering if I should mention this here, but I went ahead and blogged about it anyway.

http://blogs.sun.com/roller/page/relling?entry=i_m_tired_of_owning

Anyone who is really clever will easily get past a quota, especially at a university -- triple that probability for an engineering college. What it really boils down to is 2 things:

1. denial of service -- how to protect others from disk-hogs
2. contractual obligations -- how to charge the government (in the US, anyway) for space used for government-sponsored research... and pass the audit.

A few years ago there was a 3rd thing:

3. how to pay for the disk space.

Today, disk space is cheap. Really. All of the current college students I know carry around USB flash drives with all of their stuff on it. And iPods. If I were a college student, why would I risk my stuff being stored on the campus servers where "the man" might want to go snooping? Or, if you don't really care, use flickr, myspace, godaddy, gmail, or some other such storage service. Storage space really is becoming inexpensive.

I'm not sure anybody can fix #2, but #1 can be accomplished within reason without resorting to user quotas.

-- richard
Darren J Moffat
2006-May-19 08:36 UTC
[zfs-discuss] Re: Re: zfs snapshot for backup, Quota
Richard Elling wrote:

> Anyone who is really clever will easily get past a quota, especially
> at a university -- triple that probability for an engineering college.

I studied Computing Science at Glasgow University (Scotland); the department policy was NOT to use disk quotas. This was on SunOS 4.x, so it was possible.

What they did instead was use a separate filesystem (actually an NFS server, but that's not so relevant here) for each year of students, plus one more for staff and postgrads. Each student-year filesystem had a shared area that was world writable and a home dir for every student.

How did we manage diskspace hogs? Peer pressure: once things got above about 70% or so, the admins would send out weekly reports on who was hogging diskspace.

On the other hand, we DID have a printer quota system that limited how much use we could make of the laser printers, because that did cost money. Of course we found various ways around that!

--
Darren J Moffat