Hi,

So recently I decided to test out some of the ideas I've been toying with, and created 50,000 and then 100,000 filesystems. The test machine was a nice V20Z with dual 1.8GHz Opterons and 4GB RAM, connected to a SCSI 3310 RAID array via two SCSI controllers.

Creating the mass of filesystems, and the mass of properties I randomly assigned them, was pretty easy, and I must say, I LOVE ZFS. I really do LOVE ZFS!

The script I created basically made /data/clients/<clientID>, then randomly set a quota, randomly decided whether compression should be on, and so forth, just to give each filesystem some properties. clientID is a numeric value which starts at 000000001 and counts upwards.

During creation I was quite surprised by the amount of I/O shown on the array's management console, but nevertheless it created them all without a hitch, although it took a little while. In the real world one wouldn't create 100,000 filesystems overnight, and even if one did, one could wait an hour or two...

The problem came when I had to reboot the machine, and well... yes, a few hours later, it came up :)

So this got me thinking: ZFS makes a perfect solution for massive user-directory setups, giving you the ability to have quotas and such stored on the filesystem, and then export the root filesystem. Alas, some systems have thousands, if not hundreds of thousands, of users, where that would be an awesome solution, but mounting ALL of those filesystems at boot becomes a pain.

So... how about an automounter? Is this even possible? Does it exist?

Heeeelllpppp!!

Patrick

--
Patrick
----------------------------------------
patrick <at> eefy <dot> net
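The creation script itself isn't shown in the thread; a minimal sketch of what it might have looked like follows. POOL, the quota formula, and the dry-run wrapper are all assumptions, and the deterministic expressions below only stand in for the random choices the real script made.

```shell
# Bulk-creation sketch (a reconstruction -- the original script is not
# posted in the thread).  With DRY_RUN=1 (the default) it only prints the
# zfs commands it would issue; set DRY_RUN=0 to run them for real.
POOL=${POOL:-data}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"        # dry run: show the command instead of executing it
    else
        "$@"
    fi
}

# create_clients N: create filesystems $POOL/clients/000000001 .. N,
# each with a quota and (for every second one) compression enabled.
create_clients() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        id=$(printf '%09d' "$i")            # zero-padded client ID
        fs="$POOL/clients/$id"
        run zfs create "$fs"
        # Stand-in for the random quota the original script assigned:
        run zfs set "quota=$(( i % 10 + 1 ))G" "$fs"
        # Stand-in for the random compression on/off choice:
        if [ $(( i % 2 )) -eq 0 ]; then
            run zfs set compression=on "$fs"
        fi
        i=$(( i + 1 ))
    done
}

create_clients 3
```

Scaling the argument to 100,000 reproduces the experiment; the dry run makes it easy to inspect the command stream first.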
Patrick wrote:
> So... how about an automounter? Is this even possible? Does it exist?

*sigh* One of the issues we recognized when we introduced the new cheap/fast filesystem creation was that this new model would stress the scalability (or lack thereof) of other parts of the operating system. This is a prime example.

I think the notion of an automount option for ZFS directories is an excellent one. Solaris does support automount, and it should be possible, by setting the mountpoint property to "legacy", to set up automount tables to achieve what you want now; but it would be nice if ZFS had a property to do this for you automatically.

-Mark
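The legacy-mountpoint approach Mark describes could be wired up roughly as follows. This is only a sketch: the map names and dataset names are made up, and the `-fstype=zfs :dataset` location syntax for local mounts should be checked against automount(4) on your release.

```shell
# Take mount management away from ZFS for the client subtree
# (mountpoint is inheritable, so children become legacy too):
zfs set mountpoint=legacy data/clients

# /etc/auto_master -- add an indirect map for the client area:
#   /data/clients   auto_clients
#
# /etc/auto_clients -- one entry per dataset; each dataset is then
# mounted by autofs only on first access:
#   000000001   -fstype=zfs   :data/clients/000000001
#   000000002   -fstype=zfs   :data/clients/000000002

# The map could be regenerated from the pool after creating datasets:
zfs list -H -o name -r data/clients |
    sed -n 's|^data/clients/\(.*\)$|\1 -fstype=zfs :&|p' > /etc/auto_clients
```

The attraction is exactly what the thread asks for: boot touches only the map file, and the per-filesystem mount I/O is deferred until a client directory is actually referenced.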
Hi,

> Solaris does support automount, and it should be possible, by setting
> the mountpoint property to "legacy", to set up automount tables to
> achieve what you want now; but it would be nice if zfs had a property
> to do this for you automatically.

In my mind, something like 'zfs set automounter=on|off' would then allow ZFS to see that someone attempted to access a filesystem and mount the corresponding filesystem. That would allow you to NFS-mount <fs>/data and have data/0[1-9], for example, automatically mounted on use.

Or at least, that's how I'd have thought it'd be a good idea ;) heheh

P
> So recently I decided to test out some of the ideas I've been toying
> with, and created 50,000 and then 100,000 filesystems. The test
> machine was a nice V20Z with dual 1.8GHz Opterons and 4GB RAM,
> connected to a SCSI 3310 RAID array via two SCSI controllers.

I did a similar test a couple of months ago, albeit on a smaller system, and 'only' 10,000 users. I saw a similar delay at boot time, but also saw a large amount of memory utilisation.

> So... how about an automounter? Is this even possible? Does it exist?

Around the same time, Casper Dik mentioned the possibility of automounting ZFS datasets, as well as the possibility of cool stuff like *creating* ZFS datasets with the automounter.

One thing that hasn't been touched on is how one would back up a system when some (or most) filesystems are unmounted most of the time. Is it possible to make a backup and/or take a snapshot of an unmounted dataset (and if not, is that a future possibility)?

Steve.
Hey,

> I did a similar test a couple of months ago, albeit on a smaller system,
> and 'only' 10,000 users. I saw a similar delay at boot time, but also
> saw a large amount of memory utilisation.

I didn't notice the major memory usage, but the box had no other use than to mount this mass of empty filesystems, so I might've missed it. The I/O to mount them was HEAVY!

> Around the same time, Casper Dik mentioned the possibility of
> automounting zfs datasets, as well as the possibility of cool stuff like
> *creating* zfs datasets with the automounter.

I suppose I could do that, but it feels a bit icky, sorry for the lack of eloquent wording, but that's pretty much how it'd feel. I suppose one could create /data/sub/, tell ZFS not to mount the sub piece, and set up the automounter to control the sub piece (and any filesystems under it).

> One thing that hasn't been touched on is how one would back up a system
> when some (or most) filesystems are unmounted most of the time.

Well, technically, as far as I understand it, when the backup software accessed a mountpoint it would trigger the mount, so the filesystems would get mounted as the backup went along. And the `zfs send` approach should be possible without the filesystem being mounted at all, same with snapshots, although I'd shudder to think what a snapshot per filesystem for backup would be like.

But that said, I've got another question: is it possible to recursively send a pool? (with zfs send?)

> Is it possible to make a backup and/or take a snapshot of an unmounted
> dataset (and if not, is that a future possibility)?

I believe it's possible to snapshot unmounted datasets, as well as zfs send them, but I could be wrong, although I really don't think it should make much of a difference. As far as I remember, ZFS snapshot/send/etc. access the dataset, not the mounted filesystem.

P

--
Patrick
----------------------------------------
patrick <at> eefy <dot> net
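For what it's worth, snapshots and sends do operate at the dataset level, so a mounted filesystem is not required; a sketch (dataset and path names are made up, and flag availability depends on your ZFS release):

```shell
# A snapshot does not require the dataset to be mounted:
zfs unmount data/clients/000000001
zfs snapshot data/clients/000000001@backup
zfs send data/clients/000000001@backup > /backup/000000001.zfs

# Recursive snapshot of the whole subtree in one step, where the
# release supports `zfs snapshot -r`:
zfs snapshot -r data/clients@nightly

# There is no single recursive-send command here; one workaround is to
# iterate over the datasets (later ZFS releases added `zfs send -R`):
for fs in $(zfs list -H -o name -r data/clients); do
    zfs send "$fs@nightly" > "/backup/$(echo "$fs" | tr / _).zfs"
done
```

This is consistent with the answer above: a backup built on snapshots and send streams would sidestep the mounted/unmounted question entirely, at the cost of one snapshot per filesystem.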
On Wed, Sep 27, 2006 at 08:55:48AM -0600, Mark Maybee wrote:
> Patrick wrote:
> > So... how about an automounter? Is this even possible? Does it exist?
>
> *sigh* One of the issues we recognized when we introduced the new
> cheap/fast filesystem creation was that this new model would stress
> the scalability (or lack thereof) of other parts of the operating
> system. This is a prime example. I think the notion of an automount
> option for zfs directories is an excellent one. Solaris does support
> automount, and it should be possible, by setting the mountpoint property
> to "legacy", to set up automount tables to achieve what you want now;
> but it would be nice if zfs had a property to do this for you
> automatically.

Perhaps ZFS could write a cache on shutdown that could be used to speed up mounting on startup by avoiding all that I/O? Sounds difficult; if the cache is ever wrong there has to be some way to recover.

Alternatively, it'd be neat if ZFS could do the automounting of ZFS filesystems mounted on ZFS filesystems as needed and without autofs. It'd have to work server-side (i.e., when the trigger comes from NFS). And because of the MOUNT protocol, ZFS would still have to keep a cache of the whole hierarchy so that the MOUNT protocol can serve it without everything having to be mounted (and also so 'zfs list' can show what's there even if not yet mounted).

Nico
--