I made a mistake and created my zpool on a partition (c2t0d0p0). I can't attach another identical whole drive (c3t0d0) to this pool; I get an error that the new drive is too small (I'd have thought it would be bigger!).

The mount point of the "top" dataset is 'none', and various datasets in the pool have fixed (not inherited) mount points. If I do 'zfs send -R data@0 | zfs recv -dF data2', it stops at the first filesystem mount point that is not empty. That is, as soon as it receives and sets the mountpoint property it quits, because at that point it actually tries to mount the newly replicated dataset and fails, because zfs won't shadow directories, I guess.

I believe the last example at <http://docs.sun.com/app/docs/doc/817-2271/gfwqb?a=view> works because at the top level (users) the mountpoint is just the default, not explicit, so when replicated to users2 it becomes an implicit /users2 instead of an explicit /users.

So, is there a way to tell zfs not to perform the mounts for data2? Or another way I can replicate the pool on the same host, without exporting the original pool?

Somewhat related question: is there any way to tell zfs it's ok to shadow a directory? I would like to create datasets for /usr/local dirs in each sparse zone; however, because /usr is inherited and the global zone's /usr/local is populated, when the zone boots with a dataset whose mountpoint is /usr/local, it won't mount. If I made /usr/local a separate dataset in the global zone, would that work? (I can't test this right now.)

-frank
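For reference, a rough sketch of the replication attempt described above; the snapshot name data@0 and destination pool data2 are taken from the message, and the exact error text is approximate:

    # recursive snapshot of the source pool, then a full replication
    # stream into a second pool created on the new disk
    zpool create data2 c3t0d0
    zfs snapshot -r data@0
    zfs send -R data@0 | zfs recv -dF data2

    # the receive aborts as soon as a dataset with an explicit
    # mountpoint tries to mount over a populated directory, with an
    # error along the lines of:
    #   cannot mount '<mountpoint>': directory is not empty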
On Fri, 30 Jan 2009, Frank Cusack wrote:

> so, is there a way to tell zfs not to perform the mounts for data2? or
> another way i can replicate the pool on the same host, without exporting
> the original pool?

There is not a way to do that currently, but I know it's coming down the road.

Regards,
markm
On January 30, 2009 1:09:49 PM -0500 Mark J Musante <mmusante at east.sun.com> wrote:
> On Fri, 30 Jan 2009, Frank Cusack wrote:
>>
>> so, is there a way to tell zfs not to perform the mounts for data2? or
>> another way i can replicate the pool on the same host, without exporting
>> the original pool?
>
> There is not a way to do that currently, but I know it's coming down the
> road.

Just for closure, a likely solution (seems correct, but I am unable to test just now) was presented in another thread. I note the answer here so that a search which finds this thread has both the question and the answer in the same place.

On January 31, 2009 10:57:11 AM +0100 Kees Nuyt <k.nuyt at zonnet.nl> wrote:
> That property is called "canmount".
> man zfs
> /canmount

I didn't test, but it seems that setting canmount to noauto, replicating, then changing canmount back to on, would do the trick.

-frank
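A minimal sketch of that canmount workaround, untested as noted above; data/foo and data/bar stand in for whatever datasets carry explicit mountpoints, and since canmount is a per-dataset (non-inherited) property, each one needs to be set individually:

    # keep the received copies from being mounted automatically
    zfs set canmount=noauto data/foo
    zfs set canmount=noauto data/bar

    # replicate while nothing tries to mount on the receive side
    zfs snapshot -r data@0
    zfs send -R data@0 | zfs recv -dF data2

    # restore the original behaviour on the source datasets
    zfs set canmount=on data/foo
    zfs set canmount=on data/bar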
On January 30, 2009 9:58:56 AM -0800 Frank Cusack <fcusack at fcusack.com> wrote:
> somewhat related question, any way to tell zfs it's ok to shadow a
> directory? i would like to create datasets for /usr/local dirs in
> each sparse zone, however because /usr is inherited and the global
> zone's /usr/local is populated, when the zone boots with a dataset
> whose mountpoint is /usr/local, it won't mount. if i made /usr/local
> a separate dataset in the global zone would that work? (i can't
> test this right now.)

The answer to this appears to be no, and yes. No, there is apparently no way to tell zfs it's ok to shadow a directory, i.e. to mount a zfs dataset on top of a non-empty directory. And yes, making /usr/local a zfs dataset in the global zone makes it empty in sparse zones (similar to the nfs export problem, I guess), and then a zoned dataset can be mounted on top of it.

-frank
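A rough sketch of that arrangement, assuming a root pool named rpool and a zone named z1 (both names are made up here, and the zonecfg step may differ depending on how the dataset is handed to the zone):

    # global zone: give /usr/local its own dataset, so sparse zones
    # see an empty directory under the inherited /usr
    zfs create -o mountpoint=/usr/local rpool/usr-local

    # a per-zone dataset, delegated to the zone
    zfs create rpool/zones/z1-local
    zonecfg -z z1 "add dataset; set name=rpool/zones/z1-local; end"

    # then, from inside the zone, give it the /usr/local mountpoint:
    #   zfs set mountpoint=/usr/local rpool/zones/z1-local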
> On January 30, 2009 1:09:49 PM -0500 Mark J Musante <mmusante at east.sun.com>
> wrote:
>> On Fri, 30 Jan 2009, Frank Cusack wrote:
>>>
>>> so, is there a way to tell zfs not to perform the mounts for data2? or
>>> another way i can replicate the pool on the same host, without exporting
>>> the original pool?
>>
>> There is not a way to do that currently, but I know it's coming down the
>> road.
>
> just for closure, a likely solution (seems correct, but i am unable to
> test just now) was presented in another thread. i note the answer here
> so that a search which finds this thread has both the question and answer
> in the same place.
>
> On January 31, 2009 10:57:11 AM +0100 Kees Nuyt <k.nuyt at zonnet.nl>
> wrote:
>> That property is called "canmount".
>> man zfs
>> /canmount
>
> i didn't test, but it seems that setting canmount to noauto, replicating,
> then changing canmount back to on, would do the trick.

It turns out this doesn't work for datasets that are mounted in the global zone and can't be unmounted. Setting the canmount property to 'noauto' has the side effect (why?) of immediately unmounting the dataset, and the set fails if the unmount fails. For datasets which are zoned, if you run the 'zfs set' in the global zone, the dataset remains mounted in the zone. But for datasets mounted in the global zone, e.g. being served via NFS, the 'zfs set' fails.

Funny though: after writing the above I tested a few more times, and now I do have one of my home directories' canmount property set to 'noauto', and I can no longer change it back to 'on'. How it got set to 'noauto' is a mystery, as the dataset was never unmounted during the brief time I have been composing this email, and I was consistently getting an error message from zfs about it being in use.

-frank
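A sketch of the two cases described above; the dataset names are illustrative and the error wording is approximate:

    # dataset mounted and busy in the global zone (e.g. an NFS-shared
    # home directory): the property change tries to unmount it and fails
    zfs set canmount=noauto data/export/home
    # -> fails with a "filesystem in use" / "device busy" style error

    # zoned dataset, mounted only inside a zone: the set succeeds from
    # the global zone and the zone's mount stays up
    zfs set canmount=noauto data/zones/z1/home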
Turns out setting altroot is the way to do this. Thanks to David Dyer-Bennet for the solution, given in another thread. -frank
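A sketch of the altroot approach, using the pool and device names from earlier in the thread and assuming a release where zpool create accepts -R (which sets the altroot property); /data2root is an arbitrary empty directory. Received datasets then mount under the alternate root instead of colliding with the source pool's mountpoints:

    # create the destination pool with an alternate root
    zpool create -R /data2root data2 c3t0d0

    # full replication; a dataset whose mountpoint is /export/home
    # ends up mounted at /data2root/export/home
    zfs snapshot -r data@0
    zfs send -R data@0 | zfs recv -dF data2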
> Turns out setting altroot is the way to do this.

Doesn't work for the root pool. Once you get to the root filesystem, mounted on /, zfs attempts to mount it. Even though you are using an altroot, / now maps to /altroot, which is of course already occupied. :(

-frank