As far as I can tell, zfs exports no IO kstats. Is this a deliberate
design decision? (There appear to be some kmem_cache statistics, but I'm
not sure they tell me all that much.)

Currently, the only way I can see of getting at the IO statistics is
using 'zpool iostat'. I can't get at anything using traditional tools
such as iostat (or using any other custom tools that use traditional
kstats).

Another thing is that - again, as far as I can tell - there's no way to
get at any IO statistics on a per-filesystem basis. And this is what I'm
normally interested in and would very much like to see.

(It's true that ufs hasn't kept any IO kstats either, but that wasn't a
problem as there was a 1:1 mapping between filesystems and either disk
partitions or SVM metadevices.)

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Peter Tribble wrote On 11/27/05 09:31:
> As far as I can tell, zfs exports no IO kstats.

Correct.

> Is this a deliberate design decision?

Yes (see below).

> (There appear to be some kmem_cache statistics, but I'm not sure they
> tell me all that much.)

Agreed.

> Currently, the only way I can see of getting at the IO statistics is
> using 'zpool iostat'. I can't get at anything using traditional tools
> such as iostat (or using any other custom tools that use traditional
> kstats).

I'm not sure what you need here. A 'zpool iostat' will give pool-wide IO
statistics, 'zpool iostat -v' IO statistics per device, and iostat still
works independently of zfs.

> Another thing is that - again, as far as I can tell - there's no way
> to get at any IO statistics on a per-filesystem basis. And this is
> what I'm normally interested in and would very much like to see.

Ah good. You will be an excellent consumer for a new 'fsstat' tool that
will soon deliver per-filesystem statistics. The plan is to provide
statistics on both bandwidth and various fs operations, not just for zfs
but across all filesystems. I'm not sure when this will be available
(Rich is working hard on this at the moment), and the interface may
change, so I don't want to give too many details.

> (It's true that ufs hasn't kept any IO kstats either, but that wasn't
> a problem as there was a 1:1 mapping between filesystems and either
> disk partitions or SVM metadevices.)

-- Neil
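For reference, a minimal sketch of the commands being discussed here
('tank' is just a placeholder pool name, and the exact output columns
may vary between builds):

  # zpool iostat tank 5        <- pool-wide capacity, operations and bandwidth, every 5 seconds
  # zpool iostat -v tank 5     <- the same figures broken down per device in the pool
  # iostat -xn 5               <- traditional per-device statistics, unchanged by zfs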
Neil Perrin wrote:
> Peter Tribble wrote On 11/27/05 09:31:
> ...
>> Another thing is that - again, as far as I can tell - there's no way
>> to get at any IO statistics on a per-filesystem basis. And this is
>> what I'm normally interested in and would very much like to see.
>
> Ah good. You will be an excellent consumer for a new 'fsstat' tool
> that will soon deliver per-filesystem statistics. The plan is to
> provide statistics on both bandwidth and various fs operations, not
> just for zfs but across all filesystems. I'm not sure when this will
> be available (Rich is working hard on this at the moment), and the
> interface may change, so I don't want to give too many details.

Greetings all,

'fsstat' is a tool that will provide file system observability for all
file systems. It's growing/evolving pretty rapidly at the moment and I
need to address some internal issues before exposing this. Once those
get settled, I'll share more here.

Rich
On Mon, 2005-11-28 at 16:23, Rich Brown wrote:
> 'fsstat' is a tool that will provide file system observability for all
> file systems. It's growing/evolving pretty rapidly at the moment and I
> need to address some internal issues before exposing this. Once those
> get settled, I'll share more here.

Sounds good. I'm looking forward to it.

Is it just a standalone tool, or does it provide interfaces that could
be consumed by other tools?

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
It's a standalone tool in the spirit of vmstat, iostat, etc.

Rich

Peter Tribble wrote:
> On Mon, 2005-11-28 at 16:23, Rich Brown wrote:
>> 'fsstat' is a tool that will provide file system observability for
>> all file systems. It's growing/evolving pretty rapidly at the moment
>> and I need to address some internal issues before exposing this.
>> Once those get settled, I'll share more here.
>
> Sounds good. I'm looking forward to it.
>
> Is it just a standalone tool, or does it provide interfaces that could
> be consumed by other tools?
Is there a reason that this data is not to be exposed by kstat? It seems
to me that Sun has a history of having many disparate systems, each of
which provides a single-purpose interface. Many of these would benefit
from consolidation. Kstat seemed to be going the right way in that
respect.

Another example is management GUIs. Do we really need yet another
web-based interface for a single feature (ZFS)? I'd love to see one
single GUI that works (I'm looking at you, SMC), rather than 30 that are
independent.

On 30/11/2005, at 12:13 PM, Rich Brown wrote:
> It's a standalone tool in the spirit of vmstat, iostat, etc.
>
> Rich
>
> Peter Tribble wrote:
>> On Mon, 2005-11-28 at 16:23, Rich Brown wrote:
>>> 'fsstat' is a tool that will provide file system observability for
>>> all file systems. It's growing/evolving pretty rapidly at the
>>> moment and I need to address some internal issues before exposing
>>> this. Once those get settled, I'll share more here.
>> Sounds good. I'm looking forward to it.
>> Is it just a standalone tool, or does it provide interfaces that
>> could be consumed by other tools?
> Another example is management GUIs. Do we really need yet another
> web-based interface for a single feature (ZFS)? I'd love to see one
> single GUI that works (I'm looking at you, SMC), rather than 30 that
> are independent.

We have different GUIs for the same reason we have different commands
in /usr/bin. We could have /usr/bin/doit, which would do everything,
but that would just move the complexity into the option processing.
Whether it's a different GUI, a different web page, or one more level
of pull-right menu, at some point the user has to specify their intent.

In the end it's a matter of taste. sccs, for example, is a single
command with subcommands. I personally like that model because I think
of 'sccs edit' and 'sccs delget' as related operations -- in a way that
'ls' and 'grep' are not.

We took a similar approach with the zpool and zfs commands. The zpool
command groups all functions related to pool management; the zfs
command groups all functions related to filesystem management. Or, if
you prefer: zpool manages devices, zfs manages data.

You could make a reasonable case for putting zpool and zfs into a
single command, or for having a separate command for each operation
(e.g. /usr/sbin/zfscreate, /usr/sbin/zfslist, etc). Those are valid
alternatives, but I think the zpool/zfs model we have is nicer. Again,
purely an aesthetic judgment.

Jeff
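To make the split concrete, a minimal sketch (the pool, filesystem and
device names here are only placeholders):

  # zpool create tank mirror c1t0d0 c1t1d0   <- zpool: devices and pool topology
  # zpool status tank                        <- zpool: health of those devices
  # zfs create tank/home                     <- zfs: filesystems within the pool
  # zfs set compression=on tank/home         <- zfs: per-filesystem properties
  # zfs list                                 <- zfs: the data-side view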
On 30/11/2005, at 5:44 PM, Jeff Bonwick wrote:
> We have different GUIs for the same reason we have different commands
> in /usr/bin. We could have /usr/bin/doit, which would do everything,
> but that would just move the complexity into the option processing.
> Whether it's a different GUI, a different web page, or one more level
> of pull-right menu, at some point the user has to specify their
> intent.

Agreed, and I'm the first to defend the toolbox philosophy, but almost
all the commands in /usr/bin have some things in common: text
output/input, a common approach to things like globbing (let the shell
do it), and a set of conventions (although fairly inconsistently
followed) for command line syntax.

But multiple GUIs have none of that. They end up with different,
frequently crippled, user interaction models, inconsistent terminology,
multiple network ports exposed, re-implementations of similar code and
multiple requests to authenticate.

I don't think that having a common GUI framework that provides the
common features in a consistent way can be compared to /usr/bin/doit.
It seems the current approach is more akin to each command that wants
to produce output re-implementing its own ptys, PAM and ssh.

Of course a user at some point needs to specify intent, but to suggest
that launching another GUI is functionally equivalent to choosing a new
menu item seems rather a stretch.

Boyd
On Wed, 2005-11-30 at 06:44, Jeff Bonwick wrote:
>> Another example is management GUIs. Do we really need yet another
>> web-based interface for a single feature (ZFS)? I'd love to see one
>> single GUI that works (I'm looking at you, SMC), rather than 30 that
>> are independent.
>
> We have different GUIs for the same reason we have different commands
> in /usr/bin. We could have /usr/bin/doit, which would do everything,
> but that would just move the complexity into the option processing.
> Whether it's a different GUI, a different web page, or one more level
> of pull-right menu, at some point the user has to specify their
> intent.

What we do have though is a new framework called Lockhart; this is what
the ZFS GUI is written in. It is the same framework that the web-based
GUIs for most of the software in the Sun Java Enterprise System are
written in. This is the replacement for SMC.

> In the end it's a matter of taste. sccs, for example, is a single
> command with subcommands. I personally like that model because
> I think of 'sccs edit' and 'sccs delget' as related operations --
> in a way that 'ls' and 'grep' are not.
>
> We took a similar approach with the zpool and zfs commands. The
> zpool command groups all functions related to pool management; the
> zfs command groups all functions related to filesystem management.
> Or, if you prefer: zpool manages devices, zfs manages data.

Note also that it isn't just ZFS that has this command style; the same
thing applies in other new Solaris 10+ features:

  SMF:    svcadm, svccfg
  Zones:  zoneadm, zonecfg
  Crypto: cryptoadm (and coming soon, pktool has subcommands)

and others.

--
Darren J Moffat
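For illustration, a rough sketch of that subcommand style (the service
and zone names are just examples; 'myzone' is hypothetical):

  # svcadm restart svc:/network/ssh:default   <- SMF: act on service instances
  # svccfg -s ssh listprop                    <- SMF: inspect service configuration
  # zoneadm list -cv                          <- Zones: administer configured zones
  # zonecfg -z myzone info                    <- Zones: view/modify a zone's configuration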
* Boyd Adamson <boyd-adamson at usa.net> [2005-11-29 22:08]:
> Is there a reason that this data is not to be exposed by kstat? It
> seems to me that Sun has a history of having many disparate systems,
> each of which provides a single-purpose interface. Many of these
> would benefit from consolidation. Kstat seemed to be going the right
> way in that respect.

I'd like to come back to this point: what are the gaps in kstat(7D)
that fsstat is identifying as required for its data marshalling needs,
but unfixable? Or is the actual point that the initial ZFS delivery
does not publish kstats because a common filesystem kstat is being
developed?

- Stephen
My apologies for not responding sooner. I'm travelling and I'm backed
up on e-mail.

Stephen: You've hit the nail on the head. 'fsstat' uses kstat(7D) but
these kstats aren't being publicly advertised. There are some issues
that need to be addressed before that could happen.

Rich

Stephen Hahn wrote:
> * Boyd Adamson <boyd-adamson at usa.net> [2005-11-29 22:08]:
>> Is there a reason that this data is not to be exposed by kstat? It
>> seems to me that Sun has a history of having many disparate systems,
>> each of which provides a single-purpose interface. Many of these
>> would benefit from consolidation. Kstat seemed to be going the right
>> way in that respect.
>
> I'd like to come back to this point: what are the gaps in kstat(7D)
> that fsstat is identifying as required for its data marshalling
> needs, but unfixable? Or is the actual point that the initial ZFS
> delivery does not publish kstats because a common filesystem kstat is
> being developed?
>
> - Stephen
From: Peter Tribble
Date: 2005-Dec-01 14:15 UTC
Subject: zpool vs zfs [was Re: [zfs-discuss] IO statistics, kstats?]
On Wed, 2005-11-30 at 06:44, Jeff Bonwick wrote:
> We took a similar approach with the zpool and zfs commands. The
> zpool command groups all functions related to pool management; the
> zfs command groups all functions related to filesystem management.
> Or, if you prefer: zpool manages devices, zfs manages data.

Well, almost.

Use zpool to create a pool, and you get a filesystem thrown in. This is
something I really don't want. I want to use zpool to create the pools,
and zfs to manage the filesystems. I don't want zpool doing filesystem
creation against my wishes.

So, how to turn off this top-level filesystem? It's easy enough to do,
just setting the mountpoint to "none". Unfortunately, all subsequent
filesystems created in that pool inherit that, so they need to be fixed
up individually, which makes it undesirable.

Any other way to cleanly disable that default filesystem?

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
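For the record, a minimal sketch of the behaviour being described
('test' is just a placeholder pool name, and exact messages may differ
by build):

  # zpool create test c0t1d0
  # zfs set mountpoint=none test          <- hides the pool's top-level filesystem...
  # zfs create test/a
  # zfs get mountpoint test/a             <- ...but test/a inherits mountpoint=none
  # zfs set mountpoint=/export/a test/a   <- so every child needs fixing up by hand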
On Wed, 2005-11-30 at 10:24, Darren J Moffat wrote:
> What we do have though is a new framework called Lockhart; this is
> what the ZFS GUI is written in. It is the same framework that the
> web-based GUIs for most of the software in the Sun Java Enterprise
> System are written in. This is the replacement for SMC.

Oh dear. My experience with JES hasn't been overwhelmingly positive
(although yesterday's announcement means that it's probably time to
take another look).

And that would make it a web-based interface, right? (Which means
running all sorts of things I would probably rather not.)

I do notice that there's a JNI interface - libzfs_jni seems to be
there, although I haven't found (in the source tree either) the
matching Java code to talk to it. Is this a general-purpose Java
interface, or something specifically for the web interface?

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Thu, 2005-12-01 at 14:26, Peter Tribble wrote:
> On Wed, 2005-11-30 at 10:24, Darren J Moffat wrote:
>> What we do have though is a new framework called Lockhart; this is
>> what the ZFS GUI is written in. It is the same framework that the
>> web-based GUIs for most of the software in the Sun Java Enterprise
>> System are written in. This is the replacement for SMC.
>
> Oh dear. My experience with JES hasn't been overwhelmingly positive
> (although yesterday's announcement means that it's probably time to
> take another look).

But the framework is part of Solaris, not JES, and it is already there
in Solaris 10.

> And that would make it a web-based interface, right?

Yes.

> (Which means running all sorts of things I would probably rather not.)

Really? It doesn't require you run Apache or the Sun Web Server. So
what things would you rather not be running that you think this causes
to run?

> I do notice that there's a JNI interface - libzfs_jni seems to be
> there, although I haven't found (in the source tree either) the
> matching Java code to talk to it. Is this a general-purpose Java
> interface, or something specifically for the web interface?

So can I gather from that that you aren't of the camp of "Java GUIs are
evil", just of the camp of "Web GUIs are evil"? [BTW I'm partly in the
latter camp, but I think for quite different reasons to you.]

--
Darren J Moffat
From: Casper.Dik at Sun.COM
Date: 2005-Dec-01 14:45 UTC
Subject: zpool vs zfs [was Re: [zfs-discuss] IO statistics, kstats?]
> Use zpool to create a pool, and you get a filesystem thrown in. This
> is something I really don't want. I want to use zpool to create the
> pools, and zfs to manage the filesystems. I don't want zpool doing
> filesystem creation against my wishes.

I must admit I was unpleasantly surprised by this last minute change as
well.

> So, how to turn off this top-level filesystem? It's easy enough to
> do, just setting the mountpoint to "none". Unfortunately, all
> subsequent filesystems created in that pool inherit that, so they
> need to be fixed up individually, which makes it undesirable.
>
> Any other way to cleanly disable that default filesystem?

zfs destroy perhaps?

Also, I noticed that this change also throws off liveupgrade;
liveupgrading a system to b28 caused liveupgrade to create /export,
/export/fs1, /export/fs2, etc. for each and every mountpoint.
Subsequent boot failed because zfs didn't want to mount /export on top
of a non-empty directory.

There are some other interactions, such as the tftp loopback mount,
which doesn't work with ZFS (because the ZFS and legacy mounts do not
cooperate).

Casper
On Thu, Dec 01, 2005 at 02:15:12PM +0000, Peter Tribble wrote:
> On Wed, 2005-11-30 at 06:44, Jeff Bonwick wrote:
>> We took a similar approach with the zpool and zfs commands. The
>> zpool command groups all functions related to pool management; the
>> zfs command groups all functions related to filesystem management.
>> Or, if you prefer: zpool manages devices, zfs manages data.
>
> Well, almost.
>
> Use zpool to create a pool, and you get a filesystem thrown in. This
> is something I really don't want. I want to use zpool to create the
> pools, and zfs to manage the filesystems. I don't want zpool doing
> filesystem creation against my wishes.

What is your use case? You seem to want to leverage inherited
mountpoints (as stated below), but you _don't_ want a toplevel
filesystem? So you explicitly want a mix of UFS directories and ZFS
filesystems in the same directory? Am I understanding this correctly?

I just need to understand what "I really don't want" actually means.
Does this make deployment difficult, or is it just something that's
aesthetically unpleasing?

During the days of 'containers', we found many more people were
confused when you had directory hierarchies dependent on underlying UFS
directories. For this and other reasons, we eliminated the concept
altogether.

> So, how to turn off this top-level filesystem? It's easy enough to
> do, just setting the mountpoint to "none". Unfortunately, all
> subsequent filesystems created in that pool inherit that, so they
> need to be fixed up individually, which makes it undesirable.

You can't, because this concept doesn't exist in ZFS. We have talked
about a 'nomount' option to allow a filesystem to have a mountpoint but
not get mounted. We understand how this could be used, but have yet to
come up with a convincing real world case where it was necessary.

> Any other way to cleanly disable that default filesystem?

No.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
On Thu, Dec 01, 2005 at 02:26:03PM +0000, Peter Tribble wrote:
> And that would make it a web-based interface, right?

Yes.

> I do notice that there's a JNI interface - libzfs_jni seems to be
> there, although I haven't found (in the source tree either) the
> matching Java code to talk to it. Is this a general-purpose Java
> interface, or something specifically for the web interface?

This is not a generic JNI interface. In particular, there is no way to
initiate actions (such as filesystem creation or pool modification). It
is designed solely for use by the GUI, which lives in a separate
consolidation.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
From: Peter Tribble
Date: 2005-Dec-01 15:04 UTC
Subject: zpool vs zfs [was Re: [zfs-discuss] IO statistics, kstats?]
On Thu, 2005-12-01 at 14:45, Casper.Dik at Sun.COM wrote:
>> Use zpool to create a pool, and you get a filesystem thrown in.
...
>> Any other way to cleanly disable that default filesystem?
>
> zfs destroy perhaps?

Nope:

# zfs destroy test
cannot destroy 'test': operation does not apply to pools
use 'zfs destroy -r test' to destroy all datasets in the pool
use 'zpool destroy test' to destroy the pool itself

Not good.

# zfs destroy -r test

Destroys all the filesystems *except* the one I want to get rid of.

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Thu, 2005-12-01 at 14:52, Eric Schrock wrote:
> On Thu, Dec 01, 2005 at 02:15:12PM +0000, Peter Tribble wrote:
>> On Wed, 2005-11-30 at 06:44, Jeff Bonwick wrote:
>>> We took a similar approach with the zpool and zfs commands. The
>>> zpool command groups all functions related to pool management; the
>>> zfs command groups all functions related to filesystem management.
>>> Or, if you prefer: zpool manages devices, zfs manages data.
>>
>> Well, almost.
>>
>> Use zpool to create a pool, and you get a filesystem thrown in. This
>> is something I really don't want. I want to use zpool to create the
>> pools, and zfs to manage the filesystems. I don't want zpool doing
>> filesystem creation against my wishes.

[snip]

> During the days of 'containers', we found many more people were
> confused when you had directory hierarchies dependent on underlying
> UFS directories. For this and other reasons, we eliminated the
> concept altogether.

Don't we need this top-level one for when the root filesystem can be on
ZFS?

--
Darren J Moffat
On Thu, Dec 01, 2005 at 03:27:32PM +0000, Darren J Moffat wrote:
>> During the days of 'containers', we found many more people were
>> confused when you had directory hierarchies dependent on underlying
>> UFS directories. For this and other reasons, we eliminated the
>> concept altogether.
>
> Don't we need this top-level one for when the root filesystem can be
> on ZFS?

ZFS as a root filesystem is going to be special in many ways. It's not
going to use the normal 'zfs mount' code, and the method by which it is
created is going to be very different. For example, it would be quite
easy to have (using totally faked up commands):

# zpool create foo ...
# zfs create foo/root
# zfs makeroot foo/root

Now, when we reboot, 'foo/root' will be mounted at '/', and the '/foo'
mountpoint will be a directory within that filesystem. So I don't
really understand why we need containers...

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Darren J Moffat wrote:
> On Thu, 2005-12-01 at 14:52, Eric Schrock wrote:
>> On Thu, Dec 01, 2005 at 02:15:12PM +0000, Peter Tribble wrote:
>>> On Wed, 2005-11-30 at 06:44, Jeff Bonwick wrote:
>>>> We took a similar approach with the zpool and zfs commands. The
>>>> zpool command groups all functions related to pool management; the
>>>> zfs command groups all functions related to filesystem management.
>>>> Or, if you prefer: zpool manages devices, zfs manages data.
>>>
>>> Well, almost.
>>>
>>> Use zpool to create a pool, and you get a filesystem thrown in.
>>> This is something I really don't want. I want to use zpool to
>>> create the pools, and zfs to manage the filesystems. I don't want
>>> zpool doing filesystem creation against my wishes.
>
> [snip]
>
>> During the days of 'containers', we found many more people were
>> confused when you had directory hierarchies dependent on underlying
>> UFS directories. For this and other reasons, we eliminated the
>> concept altogether.
>
> Don't we need this top-level one for when the root filesystem can be
> on ZFS?

No. We will be supporting multiple root file systems per zpool so that
multiple boot environments (BEs in liveupgrade terminology) can exist
within the same pool. So we don't have any special claim or need for
the top-level filesystem. There might be good reasons to have this
top-level file system, but boot is not one of them.

Lori Alt
On Thu, 2005-12-01 at 14:52, Eric Schrock wrote:
> On Thu, Dec 01, 2005 at 02:15:12PM +0000, Peter Tribble wrote:
>> Use zpool to create a pool, and you get a filesystem thrown in. This
>> is something I really don't want. I want to use zpool to create the
>> pools, and zfs to manage the filesystems. I don't want zpool doing
>> filesystem creation against my wishes.
>
> What is your use case? You seem to want to leverage inherited
> mountpoints (as stated below), but you _don't_ want a toplevel
> filesystem?

Yes, that's right. I just want the toplevel to be a container.

Three reasons:

1. I want to make sure that data actually goes into the filesystems I
create. Having an extra toplevel filesystem creates the opportunity for
someone to come along and write data into it, outside my data
management policy. (Yes, I know that people will always find some way
of writing data in the wrong place.)

2. I want to have multiple pools share the same toplevel mountpoint.
Normally, /export or some relative, as it happens. This gets messy
because it then tries to mount second and subsequent ones over the top
of the first one, and this doesn't work.

3. I want all the filesystems I create to either be equivalent or to
have inheritance patterns I specify. The toplevel filesystem breaks the
symmetry and makes it much harder to manage the data. (I can't, for
example, set a tiny quota on the toplevel filesystem because that has
an impact on all the other filesystems.)

> So you explicitly want a mix of UFS directories and ZFS
> filesystems in the same directory?

Not explicitly. But just because /export/foo is in a pool doesn't mean
that /export has to be in the same pool. It could be ufs; it could
(eventually) be zfs on the root filesystem. And yes, I do want to mix
ufs and zfs filesystems in /export. And I want to mix zfs filesystems
from different pools in the same directory.

> Am I understanding this correctly?
> I just need to understand what "I really don't want" actually means.
> Does this make deployment difficult, or is it just something that's
> aesthetically unpleasing?

The sharing the toplevel mountpoint issue is a real problem.

(But I do find the current status aesthetically undesirable.)

> We have talked
> about a 'nomount' option to allow a filesystem to have a mountpoint
> but not get mounted. We understand how this could be used, but have
> yet to come up with a convincing real world case where it was
> necessary.

That sounds like exactly what I'm after!

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
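A rough sketch of the shared-mountpoint problem in reason 2 (the pool
names are placeholders, and the exact failure message is illustrative -
the point is that the second top-level filesystem cannot be mounted
over the now non-empty /export):

  # zpool create pool1 c1t0d0
  # zpool create pool2 c1t1d0
  # zfs set mountpoint=/export pool1
  # zfs set mountpoint=/export pool2   <- fails: /export is already occupied
                                          by pool1's top-level filesystem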
>>>>> "RB" == Rich Brown <Rich.Brown at Sun.COM> writes:

RB> My apologies for not responding sooner. I'm travelling and I'm
RB> backed up on e-mail.

RB> Stephen: You've hit the nail on the head. 'fsstat' uses kstat(7D)
RB> but these kstats aren't being publicly advertised. There are some
RB> issues that need to be addressed before that could happen.

And those issues are ... ?

Matt

--
Matt Simmons - simmonmt at eng.sun.com | Solaris Kernel - New York
Oh, you want *substance*. I don't do substance. Frivolous whimsy is my
speciality.
        -- Lars Magne Ingebrigtsen <larsi at gnus.org>
Thanks for the input. For the near future, you'll have to deal with
explicit mountpoints, with a mountpoint of 'none' for the pool. You can
still leverage all the other inherited properties, just not
'mountpoint'.

Sounds like we'll have to implement the 'nomount' property at some
point; we'll try to adjust the priority with respect to other planned
features.

- Eric

On Thu, Dec 01, 2005 at 04:19:05PM +0000, Peter Tribble wrote:
> [...]

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
On Thu, 2005-12-01 at 11:19, Peter Tribble wrote:
> (I can't, for example, set a tiny quota on the toplevel filesystem
> because that has an impact on all the other filesystems.)

That's a trick I used on AFS (where quota doesn't inherit); I'd set a
small quota on "structural" filesystems which only contained
mountpoints (conceptually equivalent to /home) so that other admins who
didn't understand the appropriate convention would realize quickly that
they should create a new filesystem for each subdirectory at certain
levels in the hierarchy.

It's not immediately obvious how to do this for zfs without adding a
second quota -- inherited vs self quota -- and that really starts to
complicate the administrative model.

- Bill
Hey Peter,

On Thu, 2005-12-01 at 16:19 +0000, Peter Tribble wrote:
> Yes, that's right. I just want the toplevel to be a container.
>
> Three reasons:
>
> 1. I want to make sure that data actually goes into the filesystems
> I create. Having an extra toplevel filesystem creates the opportunity
> for someone to come along and write data into it, outside my data
> management policy.

Right, though you could "zfs set readonly=on <toplevel>" and then set
readonly=off beneath that?

> 2. I want to have multiple pools share the same toplevel mountpoint.
> Normally, /export or some relative, as it happens. This gets messy
> because it then tries to mount second and subsequent ones over the
> top of the first one, and this doesn't work.

Chris had some automounter abuse that makes this sort of thing work, I
think:

http://blogs.sun.com/roller/page/chrisg?entry=zfs_snapshots_meet_automounter_and

- just like multiple user directories on UFS file systems on various
disks can all appear in /home when automounted... (though I'm a fan of
the way ZFS has multiple pools appear in separate pool subdirectories:
it gives me a better feel as to who's using what disk space - why would
I want to go back to the "old way" <shudder/> ?)

> 3. I want all the filesystems I create to either be equivalent or
> to have inheritance patterns I specify. The toplevel filesystem
> breaks the symmetry and makes it much harder to manage the data.
> (I can't, for example, set a tiny quota on the toplevel filesystem
> because that has an impact on all the other filesystems.)

Mm, I understand the inheritance thing: in this case, needing to
manually set all children to readonly=off, though you could create a
"sub filesystem" set to readonly=on, and then create children off that
without needing to do extra work, eg.

# zpool create timspool c0t0d0
# zfs set readonly=on timspool
# zfs create timspool/mygroup
# cat > timspool/mygroup/foo
timspool/mygroup/foo: cannot create (erk)
# zfs set readonly=off timspool/mygroup
# zfs create timspool/mygroup/timf
# zfs create timspool/mygroup/mike
# zfs create timspool/mygroup/jim
# cat > timspool/mygroup/timf/foo
# zfs get readonly timspool
NAME      PROPERTY  VALUE  SOURCE
timspool  readonly  on     local
# zfs get readonly timspool/mygroup
NAME              PROPERTY  VALUE  SOURCE
timspool/mygroup  readonly  off    local
# zfs get readonly timspool/mygroup/timf
NAME                   PROPERTY  VALUE  SOURCE
timspool/mygroup/timf  readonly  off    inherited from timspool/mygroup

- of course, then people will try to store stuff in timspool/mygroup,
and you're back where you started from... With careful application of a
script, and "zfs list -H -o name", it mightn't be hard to set
readonly=off for leaf file systems, and readonly=on for the others.

If you have ideas for how a command line interface for the inheritance
pattern could be achieved, without overly complicating the CLI or
boggling users, I'd be interested in hearing about it (not that I can
do anything about it - I'm just a test slave :-)

>> So you explicitly want a mix of UFS directories and ZFS
>> filesystems in the same directory?
>
> Not explicitly. But just because /export/foo is in a pool doesn't
> mean that /export has to be in the same pool. It could be ufs; it
> could (eventually) be zfs on the root filesystem. And yes, I do want
> to mix ufs and zfs filesystems in /export. And I want to mix zfs
> filesystems from different pools in the same directory.

Why? (You could use the automounter to do these kind of things, but
maybe if we understood the rationale for these more complex
requirements, we'd be able to make more soothing or sympathetic noises,
and perhaps those overworked developer guys could add features where
appropriate?)

>> Am I understanding this correctly?
>> I just need to understand what "I really don't want" actually means.
>> Does this make deployment difficult, or is it just something that's
>> aesthetically unpleasing?
>
> The sharing the toplevel mountpoint issue is a real problem.

Mm, automount?

>> We understand how this could be used, but have yet to
>> come up with a convincing real world case where it was necessary.
>
> That sounds like exactly what I'm after!

A convincing real world case? - yeah, us too :-)

cheers,
	tim

--
Tim Foster, Sun Microsystems Inc, Operating Platforms Group
Engineering Operations               http://blogs.sun.com/timf
On Thu, 2005-12-01 at 19:23, Tim Foster wrote:
>> 1. I want to make sure that data actually goes into the filesystems
>> I create. Having an extra toplevel filesystem creates the
>> opportunity for someone to come along and write data into it,
>> outside my data management policy.
>
> Right, though you could "zfs set readonly=on <toplevel>" and then set
> readonly=off beneath that?

Nope. The readonly setting is inherited. Worse, if you then try to
create a new filesystem:

# zfs create test/2
cannot mount '/test/2': unable to create mountpoint
filesystem successfully created, but not mounted

which makes sense, as test is readonly.

>> 2. I want to have multiple pools share the same toplevel mountpoint.
>> Normally, /export or some relative, as it happens. This gets messy
>> because it then tries to mount second and subsequent ones over the
>> top of the first one, and this doesn't work.
>
> Chris had some automounter abuse that makes this sort of thing work,
> I think:

Well, yes, I had wondered about automounter abuse. And the idea that
the top-level container behaves like an automount mountpoint (think
/home) does have its attractions. But needing to use the automounter
just to make zfs work seems overkill.

>> 3. I want all the filesystems I create to either be equivalent or
>> to have inheritance patterns I specify. The toplevel filesystem
>> breaks the symmetry and makes it much harder to manage the data.
>> (I can't, for example, set a tiny quota on the toplevel filesystem
>> because that has an impact on all the other filesystems.)
>
> Mm, I understand the inheritance thing: in this case, needing to
> manually set all children to readonly=off,
...
> If you have ideas for how a command line interface for the
> inheritance pattern could be achieved, without overly complicating
> the CLI or boggling users, I'd be interested in hearing about it (not
> that I can do anything about it - I'm just a test slave :-)

The problem, as I see it, is that this toplevel filesystem is different
from every other filesystem in the pool. The behaviour of the commands
is inconsistent - you have zpool creating a filesystem, and zfs
refusing to touch it because it's a pool. It has a naming convention
that doesn't fit. Overall, it's thoroughly confusing and very difficult
to manage. It would be much much better to get rid of it.

>>> So you explicitly want a mix of UFS directories and ZFS
>>> filesystems in the same directory?
>>
>> Not explicitly. But just because /export/foo is in a pool doesn't
>> mean that /export has to be in the same pool. It could be ufs; it
>> could (eventually) be zfs on the root filesystem. And yes, I do want
>> to mix ufs and zfs filesystems in /export. And I want to mix zfs
>> filesystems from different pools in the same directory.
>
> Why? (You could use the automounter to do these kind of things, but
> maybe if we understood the rationale for these more complex
> requirements, we'd be able to make more soothing or sympathetic
> noises, and perhaps those overworked developer guys could add
> features where appropriate?)

I'm not trying to do anything complex. I'm trying to keep things as
simple and straightforward as possible! (And zfs, usually so
jaw-droppingly useful, suddenly starts getting in the way.)

>>> Am I understanding this correctly?
>>> I just need to understand what "I really don't want" actually
>>> means. Does this make deployment difficult, or is it just something
>>> that's aesthetically unpleasing?
>>
>> The sharing the toplevel mountpoint issue is a real problem.
>
> Mm, automount?

Again, I don't see why it should be necessary to add an additional
layer of complexity to fix something that should just work. And this
wouldn't necessarily help, as it just adds another layer of
indirection. What if I wanted to use the automounter for real, with a
map:

* server:/export/&

But then I would need to use the automounter to create /export out of
its constituent fragments.

Yes, there are (imaginative) ways round these issues. That's not the
point. The real point is that, by and large, zfs delivers on the
promise of being simple to administer, but this particular feature
simply doesn't work.
On Thu, 2005-12-01 at 16:29, Eric Schrock wrote:
> Thanks for the input. For the near future, you'll have to deal with
> explicit mountpoints, with a mountpoint of 'none' for the pool. You
> can still leverage all the other inherited properties, just not
> 'mountpoint'.

That's OK. We're only in the testing phase, after all.

> Sounds like we'll have to implement the 'nomount' property at some
> point; we'll try to adjust the priority with respect to other planned
> features.

This would be non-inherited, right?

Thanks again for listening.

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Mostly naming issues and expansion issues. No showstoppers, but a
couple of things to get the appropriate "blessings" on.

Rich

Matthew Simmons wrote:
>>>>>> "RB" == Rich Brown <Rich.Brown at Sun.COM> writes:
>
> RB> My apologies for not responding sooner. I'm travelling and I'm
> RB> backed up on e-mail.
>
> RB> Stephen: You've hit the nail on the head. 'fsstat' uses kstat(7D)
> RB> but these kstats aren't being publicly advertised. There are some
> RB> issues that need to be addressed before that could happen.
>
> And those issues are ... ?
>
> Matt
On Thu, Dec 01, 2005 at 09:28:15PM +0000, Peter Tribble wrote:
> On Thu, 2005-12-01 at 19:23, Tim Foster wrote:
> Well, yes, I had wondered about automounter abuse. And the idea that
> the top-level container behaves like an automount mountpoint (think
> /home) does have its attractions. But needing to use the automounter
> just to make zfs work seems overkill.

I totally agree.

>>> 3. I want all the filesystems I create to either be equivalent or
>>> to have inheritance patterns I specify. The toplevel filesystem
>>> breaks the symmetry and makes it much harder to manage the data.
>>> (I can't, for example, set a tiny quota on the toplevel filesystem
>>> because that has an impact on all the other filesystems.)
>>
>> Mm, I understand the inheritance thing: in this case, needing to
>> manually set all children to readonly=off,
> ...
>> If you have ideas for how a command line interface for the
>> inheritance pattern could be achieved, without overly complicating
>> the CLI or boggling users, I'd be interested in hearing about it
>> (not that I can do anything about it - I'm just a test slave :-)
>
> The problem, as I see it, is that this toplevel filesystem is
> different from every other filesystem in the pool. The behaviour of
> the commands is inconsistent - you have zpool creating a filesystem,
> and zfs refusing to touch it because it's a pool. It has a naming
> convention that doesn't fit. Overall, it's thoroughly confusing and
> very difficult to manage. It would be much much better to get rid of
> it.

I don't know how much exposure you had to ZFS before we integrated, but
your proposal is exactly the way things used to behave, back in the
days of ZFS containers. The idea was that you had the same ZFS
namespace hierarchy that was also reflected in your filesystem
namespace (like today). The difference was that containers had no data,
only properties associated with them. So if you had something like
this:

    Name              Type        Mountpoint/directory
    ----------------  ----------  --------------------
    tank              container   /export
    tank/home         container   /export/home
    tank/home/billm   filesystem  /export/home/billm
    tank/home/casper  filesystem  /export/home/casper

Then any files that were created in /export or /export/home "fell
through" to the underlying UFS filesystem. Needless to say, this was a
very confusing situation, especially since ZFS was maintaining its
namespace from /export on down, but not all of the data created there
would land in ZFS. Because of this, based on a bunch of feedback, we
got rid of containers and made everything filesystems. This wound up
having the exact effect that everyone here is complaining about. The
feedback we received from our users on this (both before and after
actual implementation) was quite positive.

The short of it is that whichever way we go, there will be some subset
of people that don't like it. The only alternative is to provide (yet
another) switch/level for people to diddle with. And this is something
we've tried to avoid when possible.

As you might imagine, the perceived inconsistency that is an issue in
this thread was discussed long and hard by the ZFS team. After much
debate, we felt that the best overall experience, given the goals of
the ZFS project, was achieved with the solution we have now provided.
If you have a suggestion for how you would want to modify the current
implementation to achieve what you want without simply not creating the
top-level filesystem in ZFS, we would be very interested in what you
have to say. Perhaps a "nomount" option would be sufficient?

>>> Not explicitly. But just because /export/foo is in a pool doesn't
>>> mean that /export has to be in the same pool. It could be ufs; it
>>> could (eventually) be zfs on the root filesystem. And yes, I do
>>> want to mix ufs and zfs filesystems in /export. And I want to mix
>>> zfs filesystems from different pools in the same directory.
>>
>> Why? (You could use the automounter to do these kind of things, but
>> maybe if we understood the rationale for these more complex
>> requirements, we'd be able to make more soothing or sympathetic
>> noises, and perhaps those overworked developer guys could add
>> features where appropriate?)
>
> I'm not trying to do anything complex. I'm trying to keep things as
> simple and straightforward as possible! (And zfs, usually so
> jaw-droppingly useful, suddenly starts getting in the way.)

So what, exactly, are you trying to do that setting an explicit
mountpoint on things does not accomplish?

> Yes, there are (imaginative) ways round these issues. That's not the
> point. The real point is that, by and large, zfs delivers on the
> promise of being simple to administer, but this particular feature
> simply doesn't work.

In your situation, it may make administration slightly more difficult,
but I would not say that it simply doesn't work. It does work as
intended, and it does (according to the feedback we've received) make
most people's lives better.

--Bill
On Thu, 2005-12-01 at 22:17, Bill Moore wrote:
> On Thu, Dec 01, 2005 at 09:28:15PM +0000, Peter Tribble wrote:
>> The problem, as I see it, is that this toplevel filesystem is
>> different from every other filesystem in the pool. The behaviour of
>> the commands is inconsistent - you have zpool creating a filesystem,
>> and zfs refusing to touch it because it's a pool. It has a naming
>> convention that doesn't fit. Overall, it's thoroughly confusing and
>> very difficult to manage. It would be much much better to get rid of
>> it.
>
> I don't know how much exposure you had to ZFS before we integrated,
> but your proposal is exactly the way things used to behave, back in
> the days of ZFS containers.

Well, actually, I've been happily running and testing zfs for the best
part of 18 months, so I guess that's a lot of exposure!

> The idea was that you had the same ZFS namespace hierarchy that was
> also reflected in your filesystem namespace (like today). The
> difference was that containers had no data, only properties
> associated with them. So if you had something like this:
>
>     Name              Type        Mountpoint/directory
>     ----------------  ----------  --------------------
>     tank              container   /export
>     tank/home         container   /export/home
>     tank/home/billm   filesystem  /export/home/billm
>     tank/home/casper  filesystem  /export/home/casper
>
> Then any files that were created in /export or /export/home "fell
> through" to the underlying UFS filesystem. Needless to say, this was
> a very confusing situation, especially since ZFS was maintaining its
> namespace from /export on down, but not all of the data created there
> would land in ZFS.

Having used it that way for 18 months, I never once got confused. I've
used the new way for a week, and I'm thoroughly confused by the way
it's operating now.

You don't need to create a filesystem to manage the namespace, though.
(I'm not convinced that you need to manage the namespace. I never had a
problem with that.) You could create and manage the directories without
having a proper filesystem. In just the exact same way that the
.zfs/snapshot directories are created now, for example. (Or think back
to the automount mount point analogy.)

> As you might imagine, the perceived inconsistency that is an issue in
> this thread was discussed long and hard by the ZFS team. After much
> debate, we felt that the best overall experience, given the goals of
> the ZFS project, was achieved with the solution we have now provided.
> If you have a suggestion for how you would want to modify the current
> implementation to achieve what you want without simply not creating
> the top-level filesystem in ZFS, we would be very interested in what
> you have to say. Perhaps a "nomount" option would be sufficient?

It would certainly go some way to addressing some of the specific
issues. Unfortunately, it still leaves the inconsistencies in place
(just hidden). And leaves other problems like the 'zpool destroy -f'
issue being discussed in another thread.

Or perhaps allowing the user to manage the toplevel filesystem with
zfs, so I could destroy it with zfs destroy?

Or perhaps changing it so that 'zpool create tank' just creates the
pool, and you then 'zfs create tank' to make the toplevel filesystem.
This could even be the default, but at least it would cleanly separate
the functionality of zpool and zfs, and allow oddballs like me to
override the default.

>> I'm not trying to do anything complex. I'm trying to keep things as
>> simple and straightforward as possible! (And zfs, usually so
>> jaw-droppingly useful, suddenly starts getting in the way.)
>
> So what, exactly, are you trying to do that setting an explicit
> mountpoint on things does not accomplish?

As an example:

zfs create -c peter/opt
zfs set mountpoint=/opt peter/opt
zfs create peter/opt/SUNWspro
zfs create peter/opt/netbeans
zfs create peter/opt/SUNWonbld
...

Which works perfectly pre snv_27. Now, I can still do this, but I have
a couple of problems: (1) the peter/opt is visible as a filesystem (and
has to be mounted off at some strange place as I can't use /opt), and
(2) I have to set the mountpoint explicitly on each filesystem I create
- not only is it more work, there's the possibility I might forget.

> In your situation, it may make administration slightly more
> difficult, but I would not say that it simply doesn't work. It does
> work as intended, and it does (according to the feedback we've
> received) make most people's lives better.

Fair enough. However, I'm rapidly becoming convinced that containers
are an extremely useful administrative abstraction and that zfs is
poorer for their absence.

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Thu, 2005-12-01 at 14:39, Darren J Moffat wrote:
> On Thu, 2005-12-01 at 14:26, Peter Tribble wrote:
>> On Wed, 2005-11-30 at 10:24, Darren J Moffat wrote:
>>> What we do have though is a new framework called Lockhart; this is
>>> what the ZFS GUI is written in. It is the same framework that the
>>> web-based GUIs for most of the software in the Sun Java Enterprise
>>> System are written in. This is the replacement for SMC.
...
>> (Which means running all sorts of things I would probably rather
>> not.)
>
> Really? It doesn't require you run Apache or the Sun Web Server.
>
> So what things would you rather not be running that you think this
> causes to run?

I don't mind apache, although I always build it myself. What I really
don't like is all the CIM/WBEM, smc, snmp, sea, and whatever else
cluttering up my machine.

> So can I gather from that that you aren't of the camp of "Java GUIs
> are evil", just of the camp of "Web GUIs are evil"? [BTW I'm partly
> in the latter camp, but I think for quite different reasons to you.]

No, I'm just in the camp of "bad GUIs are evil". That seems to cover
most of them :-(

But, I held off so I could actually test this thing (as it's in build
28). And, you know, it's not completely bad. There are certain things I
don't like, for sure, and a number of small issues that ought to be
addressed, but it's essentially functional. (Of course, it helps that
it's got a good and simple administrative model underneath.)

Some notes on the web interface:

The certificate belongs to the unqualified hostname. Is there a way to
set the name on the certificate so that it matches what the rest of the
world thinks?

It's pretty slow. And the process is sitting there using over 250MB of
memory.

The dull grey colors aren't exactly inspiring. I don't want vile
clashes, but grey on grey is going a bit far.

Is the help local or global? From its position in the main banner, next
to logout, I would expect generic help on the web console, but it seems
that it's actually application-specific help.

What does the Version mean? Is this the version of web console, or zfs?
It seems to be the zfs administration tool version, which isn't what I
expected.

The version and help buttons change what they do depending on context,
but there is nothing to indicate this. They're both next to buttons
that don't depend on the context, confusing me even more.

The console button is still active when you go to the home page.

I click on 'Storage Pools' in the left-hand tab, then select a pool. So
I'm viewing a storage pool. It includes a section 'snapshots of this
file system' - but, hang on, this was a storage pool, so that's
confusing. Then it says 'datasets within this file system' - but, hang
on, this was a storage pool, so that's confusing. Then, when you look
at the datasets, the top-level filesystem is missing.

Is it 'file system' or 'filesystem'?

I click on 'File Systems' in the left-hand tab. This time, it shows me
all the filesystems including the top-level one. However, it's got the
space used wrong for the top-level filesystem. It shows the total usage
by this filesystem and its descendants, not that used just by this
filesystem. (On the other hand, df shows the space used just by that
filesystem, which is the number I want.) As a result, the space is
shown as used twice. (It's important to be able to see both numbers,
but it's not showing the one you would expect.)

I click on 'Snapshots' in the left-hand tab. The Size field in the
table is very confusing. I would expect to see the overall space in the
Size column - which corresponds to the referenced amount, not the used
amount (which I probably also want to see in a separate column). My
first thought on seeing a Size of 0 was "yikes - I'm sure there was
data in that snapshot, where's it all gone?".

I go back to the top and try to create a filesystem. Why does it put
this in a popup? I hate popups. Even worse, the browse buttons create
yet another popup.

The browse popups don't work very well. For example, in the snapshot
selection popup I expect to see a list of snapshots. I don't expect to
have to click a couple of times to expand the hierarchy before I can
even see a snapshot.

I put junk into the Parent file system box. And it complains it's not a
valid filesystem. If you can only choose an existing filesystem, why
put up an input box that I can enter free text into? Why not have a
dropdown list or simply a list I can click on? (Ditto the snapshot
entry.)

The tab order of the fields is wrong. After entering something in the
parent field, tab should land me in the name field.

Four steps to create a filesystem seems too many. Why not combine the
review and preview steps?

OK, so it was successful. I wanted to know that, but now it just leaves
me hanging around. It updates the main browser window (which isn't
actually what I would expect an action in a popup window to do), but
leaves the popup around.

While viewing a filesystem, I clicked on the 'change properties' link.
Another of these irritating popup windows. Even worse, the popup window
is quite small and the properties list larger than the frame in which
it's put, so you have to scroll the property table.

In the properties section, each property has a Change... link next to
it. This just gives you the standard property change popup window - I
expected just that single property.

An interesting nomenclature. While zfs may think in terms of vdevs,
having my disk described as a virtual device is very confusing. It's a
real physical piece of spinning metal.

Also under virtual devices, it describes partitions as disks. It's
important to know whether zfs is managing the entire disk or just a
partition on it.

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On 12/9/05, Peter Tribble <P.Tribble at herts.ac.uk> wrote:
> No, I'm just in the camp of "bad GUIs are evil". That seems to cover
> most of them :-(
>
> But, I held off so I could actually test this thing (as it's in build
> 28). And, you know, it's not completely bad. There are certain things
> I don't like, for sure, and a number of small issues that ought to be
> addressed, but it's essentially functional. (Of course, it helps that
> it's got a good and simple administrative model underneath.)
>
> Some notes on the web interface:
>
> The certificate belongs to the unqualified hostname. Is there a way
> to set the name on the certificate so that it matches what the rest
> of the world thinks?
>
> It's pretty slow. And the process is sitting there using over 250MB
> of memory.

250MB to display a few graphics? The ZFS user interface is simple
enough; it would be a lot better just to hack together some tcl/Tk or
some PHP code and be a hell of a lot smaller. Perhaps the people that
write these interfaces should be forced to use a 256MB Blade 100, to
see exactly how the other half lives.

I was excited to look at this interface when I first read about it, but
it looks like I just lost interest, and I hope it's disabled by default
now. Surely 250MB would be better used as a disk cache than eyecandy.
Yes, I have over a gigabyte of RAM in my ZFS test box, but 25% of my
memory for an eyecandy display of my hard drives?

James Dickens
uadmin.blogspot.com