With the current nv28 bits, when I create a zpool, I see it mounted and
usable as a file system (following a 'zpool create zpool1 /dev/ramdisk/zfsd').

With earlier (pre-nv integration) bits, I had a named zfs mounted only
after an explicit 'zfs create' on a previously created zpool.

Does this reflect a recent design/feature change? I no longer need to do
an explicit 'zfs create' for a usable zfs, unless I wish to have multiple
file systems on a single pool?

Thanks - reading through the docs as well (trying to do the right thing)...

/jim
Yes, prior to integration into Nevada we decided that when you create a
pool, a filesystem (with the same name as the pool) is created along with
it. So you are correct: you only need to do an explicit 'zfs create' if
you'd like other filesystems in your pool besides the one that is already
created for you.

Noel :-)
************************************************************************
"Question all the answers"
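For example (the device name below is just a placeholder, and the 'zfs
list' output is trimmed), the behavior Noel describes looks like this:

   # zpool create tank c0t0d0
   # zfs list
   NAME   USED  AVAIL  REFER  MOUNTPOINT
   tank    ...    ...    ...  /tank

The pool's root filesystem is mounted at /tank immediately; an explicit
'zfs create tank/home' is only needed for additional filesystems.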
Noel Dellofano wrote:
> Yes, prior to integration into Nevada we decided that when you create a
> pool, a filesystem (with the same name as the pool) is created along
> with it. So you are correct: you only need to do an explicit 'zfs
> create' if you'd like other filesystems in your pool besides the one
> that is already created for you.

IMHO it's a very good idea to only use the parent zpool filesystem as a
container for explicitly created ZFS filesystems, or at least to always
have a "dummy" ZFS filesystem within the pool.

Reason: "zpool destroy" will not ask for any confirmation if you blow away
a pool that doesn't have an explicitly created ZFS filesystem within it,
even if there's lots of data sitting in the top-level zpool filesystem.
[Just tried it to make sure I'm not talking through my hat... bye-bye
500MB of data.]

Once you create a zfs filesystem within the pool, you at least have to
remove that filesystem before a zpool destroy will blow the pool away.

Jason =:^/
On 11/29/05, Jason Ozolins <Jason.Ozolins at anu.edu.au> wrote:
> IMHO it's a very good idea to only use the parent zpool filesystem as a
> container for explicitly created ZFS filesystems, or at least to always
> have a "dummy" ZFS filesystem within the pool.
>
> Reason: "zpool destroy" will not ask for any confirmation if you blow
> away a pool that doesn't have an explicitly created ZFS filesystem
> within it, even if there's lots of data sitting in the top-level zpool
> filesystem. [Just tried it to make sure I'm not talking through my
> hat... bye-bye 500MB of data.]
>
> Once you create a zfs filesystem within the pool, you at least have to
> remove that filesystem before a zpool destroy will blow the pool away.
>
> Jason =:^/

That's a good point. I was also confused when I created my first pool and
found out I could use it as a filesystem. It's convenient when I need only
one filesystem, but IMHO it also introduces an inconsistency between
creating and removing a zfs (as mentioned by Jason).

Tao
On 30/11/2005, at 10:44 AM, Jason Ozolins wrote:
> IMHO it's a very good idea to only use the parent zpool filesystem as a
> container for explicitly created ZFS filesystems, or at least to always
> have a "dummy" ZFS filesystem within the pool.
> [...]
> Once you create a zfs filesystem within the pool, you at least have to
> remove that filesystem before a zpool destroy will blow the pool away.

Another good reason is that it seems to be impossible to restore
snapshots from the pool back the way they were (maybe I'm missing
something):

bash-3.00# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
mypool       36.1M  2.92G  35.3M  /mypool
mypool@now    798K      -   799K  -
bash-3.00# zfs backup mypool@now > /var/tmp/mybackup
bash-3.00# zpool destroy mypool
bash-3.00# zpool create mypool c0t2d0s0
bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
...
mypool                1000M     8K  1000M     1%    /mypool
bash-3.00# zfs restore mypool < /var/tmp/mybackup
Can't restore: destination fs mypool already exists
bash-3.00# zfs restore mypool@now < /var/tmp/mybackup
Can't restore: destination fs mypool already exists
bash-3.00# zfs restore -d mypool < /var/tmp/mybackup
bash-3.00# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
mypool              834K   999M  8.50K  /mypool
mypool/mypool       798K   999M   798K  /mypool/mypool
mypool/mypool@now      0      -   798K  -

How do I get back mypool@now in the same place that it came from?

P.S.: I miss "zfs ls"
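One hedged workaround sketch, assuming the backup/restore syntax shown
above plus ordinary 'zfs snapshot'/'zfs destroy' and cpio: restore into
the child that 'zfs restore -d' creates, copy the data back into the
pool's root filesystem, and re-create the snapshot there by hand. This is
not a true in-place restore; the snapshot at the root is a fresh one that
merely has the same name and contents:

bash-3.00# zfs restore -d mypool < /var/tmp/mybackup
bash-3.00# (cd /mypool/mypool && find . -print | cpio -pdm /mypool)
bash-3.00# zfs snapshot mypool@now
bash-3.00# zfs destroy mypool/mypool@now
bash-3.00# zfs destroy mypool/mypool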
> Once you create a zfs filesystem within the pool, you at least have to
> remove that filesystem before a zpool destroy will blow the pool away.

I just wanted to point out, to avoid future confusion, that the actual
current behavior is different than you specified above. Zfs will allow
you to destroy a pool even if it has an active filesystem within it.

Create a pool:

[fsh-ika 17:31] # zpool create pool c0t1d0s0
[fsh-ika 17:43] # zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
pool                   7.94G    164K   7.94G     0%  ONLINE     -

Create a filesystem in the pool and put a file in it:

[fsh-ika 17:33] # zfs create pool/fsA
[fsh-ika 17:33] # ls /pool
fsA
[fsh-ika 17:33] # echo 1 >> pool/fsA/stuff.txt
[fsh-ika 17:34] # ls pool/fsA
stuff.txt

The pool is destroyed along with the filesystem within it:

[fsh-ika 17:36] # zpool destroy pool
[fsh-ika 17:45] # zpool list
no pools available

There was some discussion previously on the OpenSolaris list about adding
a -f flag to 'zpool destroy', in order to prevent situations like the one
you mentioned previously and also the above situation. You can find that
thread here:
http://www.opensolaris.org/jive/thread.jspa?messageID=15575#15575

Noel :-)
************************************************************************
"Question all the answers"
On Wed, Nov 30, 2005 at 05:01:54PM +1100, Boyd Adamson wrote:
> Another good reason is that it seems to be impossible to restore
> snapshots from the pool back the way they were (maybe I'm missing
> something):

I'm no backup/restore expert, but this sounds like a bug.

> P.S.: I miss "zfs ls"

Why? What is deficient in 'zfs list' and 'zfs get all'?

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
On 01/12/2005, at 10:01 AM, Eric Schrock wrote:
> On Wed, Nov 30, 2005 at 05:01:54PM +1100, Boyd Adamson wrote:
>> P.S.: I miss "zfs ls"
>
> Why? What is deficient in 'zfs list' and 'zfs get all'?

Oh, nothing... it's more that my muscle memory wants to type ls (as an
alias for list, a la "svn list" / "svn ls").
Boyd Adamson:
> Oh, nothing... it's more that my muscle memory wants to type ls (as an
> alias for list, a la "svn list" / "svn ls").

$ zfs() { [ $1 = ls ] && { shift; command zfs list $*; return; }; command zfs $*; }
$ zfs ls
NAME   USED  AVAIL  REFER  MOUNTPOINT
dsk   27.5G   115G  35.8M  /dsk

k. :)
On Wed, Nov 30, 2005 at 11:59:51PM +0000, Kate wrote:
> Boyd Adamson:
>> Oh, nothing... it's more that my muscle memory wants to type ls (as an
>> alias for list, a la "svn list" / "svn ls").
>
> $ zfs() { [ $1 = ls ] && { shift; command zfs list $*; return; }; command zfs $*; }

You should probably be quoting things:

$ zfs() { [ "$1" = ls ] && { shift; command zfs list "$@"; return; }; command zfs "$@"; }

> $ zfs ls
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> dsk   27.5G   115G  35.8M  /dsk
>
> k. :)

--
Jonathan Adams, Solaris Kernel Development
Jonathan Adams:
> You should probably be quoting things

Hm, you're quite right. zsh has apparently taught me some bad habits by
handling $* as "$@" in that context, whereas other shells don't...

k.
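A quick illustration of the difference (sh and bash word-split an
unquoted $* on whitespace; zsh by default does not):

$ set -- "two words" three
$ printf '<%s>\n' $*
<two>
<words>
<three>
$ printf '<%s>\n' "$@"
<two words>
<three>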
Noel Dellofano wrote: [quoting Jason Ozolins]
>> Once you create a zfs filesystem within the pool, you at least have to
>> remove that filesystem before a zpool destroy will blow the pool away.
>
> I just wanted to point out, to avoid future confusion, that the actual
> current behavior is different than you specified above. Zfs will allow
> you to destroy a pool even if it has an active filesystem within it:

D'oh! Sorry for any confusion caused. I must have confused it with the
"-r" requirement when destroying a zfs dataset with children.

> There was some discussion previously on the OpenSolaris list about
> adding a -f flag to 'zpool destroy', in order to prevent situations
> like the one you mentioned previously and also the above situation.
> You can find that thread here:
> http://www.opensolaris.org/jive/thread.jspa?messageID=15575#15575
>
> Noel :-)

Gah, having read that thread I see it's a big can of worms. The way I see
it, given that the zpool always has a top-level zfs dataset associated
with it, you can't really have a policy of "destroying a zpool containing
a dataset requires -f", because then every zpool destroy requires -f and
the flag loses its meaning. :-(

Someone did say that immediate recovery from a zpool destroy is not hard
to implement. That would be nice. The suggested "nodelete" property for
datasets would also be nice, for those of us prone to brain outages.

-Jason =:^)
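Until something like that -f flag or "nodelete" property exists, a purely
illustrative shell-side stopgap in the style of the zfs-ls function above
is possible. The -f handling here is my own convention, mimicking the
flag discussed in that thread, not a real 'zpool destroy' option in these
bits:

zpool() {
    if [ "$1" = destroy ]; then
        if [ "$2" = "-f" ]; then
            command zpool destroy "$3"
            return
        fi
        echo "zpool destroy: re-run as 'zpool destroy -f <pool>' to confirm" >&2
        return 1
    fi
    command zpool "$@"
}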