I have a pool called "data". I have zones configured in that pool. The zonepath is /data/zone1/fs. (/data/zone1 itself is not used for anything else, by anyone, and has no other data.) There are no datasets delegated to this zone.

I want to create a snapshot that I can make available from within the zone. What are the best options?

If I do something like:

    zfs snapshot data/zone1@1hrbackup

how do I make that snapshot available to the zone?

It seems like I have two options:

1. In zonecfg:

    add dataset
    set name=data/zone1/recover
    end

   Then:

    zfs send data/zone1@1hrbackup | zfs recv data/zone1/recover@1hrbackup

   I think this option might work, but zfs send will send the whole data/zone1 file system, which will use more disk space, instead of just sending the snapshot deltas.

2. I was thinking maybe I could NFS-share /data/zone1/.zfs/snapshot to zone1, then access that file system as an NFS client from within the zone.

Thanks, hope that's clear.
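P.S. To make option 1 concrete, here is roughly the sequence I have in mind, written out as commands (the "2hrbackup" snapshot name is made up for illustration). If I understand zfs send -i correctly, only the first copy is a full send; later snapshots can be sent incrementally, so the extra disk space is mostly the initial duplicate.

In zonecfg, delegate the target dataset to the zone:

    add dataset
    set name=data/zone1/recover
    end

Then, in the global zone:

    # first snapshot: a full send/recv creates data/zone1/recover
    zfs snapshot data/zone1@1hrbackup
    zfs send data/zone1@1hrbackup | zfs recv data/zone1/recover@1hrbackup

    # later snapshots: send only the changes since the previous one
    zfs snapshot data/zone1@2hrbackup
    zfs send -i data/zone1@1hrbackup data/zone1@2hrbackup | \
        zfs recv data/zone1/recover@2hrbackup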
Anil Jangity wrote:
> I have a pool called "data". I have zones configured in that pool.
> The zonepath is /data/zone1/fs. [...]
> I want to create a snapshot that I can make available from within
> the zone. What are the best options?
...

Hi Anil,
I can't answer your questions directly, but I'd like to mention some of the things I've learnt while running zones with zoneroots on a zpool:

- At this point in time, LiveUpgrade doesn't handle zones with zoneroot on zfs. It's being worked on. In the meantime, put your zoneroot on ufs.

- It's better to do loopback (lofs) mounts of your zfs so that they're visible in the zone, rather than nfs mounts. I've been told that this is due to the way that these two fs types interact with memory allocation. (Note that I might have that reasoning expressed badly - vm isn't my area.)

Here's what it looks like in my zone config file:

    <filesystem special="/export/home/jmcp" directory="/export/home/jmcp" type="lofs">
      <fsoption name="rw"/>
    </filesystem>

There's handy content here, too:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_and_Zones

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp    http://www.jmcp.homeunix.com/blog
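P.S. For completeness, the zonecfg session that produces an entry like the XML above would look roughly like this. "dir" is the mount point inside the zone and "special" is the path in the global zone (the same path for both in my case; adjust for yours), and the zone name "myzone" is just a placeholder:

    zonecfg -z myzone
    add fs
    set dir=/export/home/jmcp
    set special=/export/home/jmcp
    set type=lofs
    add options rw
    end
    commit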
Anil Jangity wrote:
> I have a pool called "data". I have zones configured in that pool.
> The zonepath is /data/zone1/fs. [...]
> I want to create a snapshot that I can make available from within
> the zone. What are the best options?
>
> If I do something like:
> zfs snapshot data/zone1@1hrbackup
>
> How do I make that snapshot available to the zone?

I have a zone on zfs in which I make my homedir (also a zfs) visible via lofs. By setting the snapdir property to visible within the global zone, my snapshot is visible in the local zone via the .zfs directory. As James mentioned, that means reinstalling the zone every time I liveupgrade, but I'm ok with that.

Hope that helps.

--
John Wren Kennedy
Solaris RPE
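P.S. A rough sketch of what that looks like, with made-up dataset and path names (mine differ); the lofs mount itself is added the same way as in James's example:

    # global zone: expose the .zfs directory and take a snapshot
    zfs set snapdir=visible data/home
    zfs snapshot data/home@1hrbackup

    # inside the zone, with data/home lofs-mounted at /export/home,
    # the snapshot then shows up read-only under .zfs:
    ls /export/home/.zfs/snapshot/1hrbackup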
Thanks James/John!

That link specifically mentions "new Solaris 10 release", so I am assuming that means going from, say, u4 to Solaris 10 u5, and that it shouldn't cause a problem when doing plain patchadds (without Live Upgrade). If so, then I am fine with those warnings and can use zfs for the zones' path.

So, to do that lofs mount, I could do something like:

    zfs set snapdir=visible data/zone1

    add fs
    set dir=/data/zone1/zfsfiles
    set special=/data/zone1/.zfs
    set type=lofs
    end

Then, from inside the zone, I should be able to do something like:

    ls /data/zone1/zfsfiles/snapshot/1hrbackup

Please correct me if I am wrong. (Just want to make sure I got it right before I go try this on this semi-production system. Unfortunately, I don't have a test system on hand to play with right now.)
James C. McPherson wrote:
> Anil Jangity wrote:
>> I have a pool called "data". I have zones configured in that pool. [...]
...
> - It's better to do loopback (lofs) mounts of your zfs so that
> they're visible in the zone, rather than nfs mounts. I've
> been told that this is due to the way that these two fs
> types interact with memory allocation. (Note that I might
> have that reasoning expressed badly - vm isn't my area.)

I don't know if anything else breaks when you do this, but if you are building software in a zone on a lofs filesystem, dmake hangs. Regular make works fine.

The output from truss is:

    stat64("/export/home", 0x08045B60)              = 0
    llseek(8, 0, SEEK_CUR)                          = 0
    llseek(8, 0, SEEK_SET)                          = 0
    ioctl(8, (('m'<<8)|7), 0x08045714)              = 0
    ioctl(8, (('m'<<8)|7), 0x08045714)              = 0
    ioctl(8, (('m'<<8)|7), 0x08045714)              = 0

repeated over and over.

Ian
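P.S. For reference, this is roughly how I capture output like that; the build target and output file names are just placeholders:

    # attach to the already-hung dmake, following child processes
    truss -f -p $(pgrep -n dmake)

    # or run the whole build under truss and log to a file
    truss -f -o /tmp/dmake.truss dmake all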
Ian Collins wrote:
...
> I don't know if anything else breaks when you do this, but if you are
> building software in a zone on a lofs filesystem, dmake hangs. Regular
> make works fine.
>
> The output from truss is:
>
> stat64("/export/home", 0x08045B60)              = 0
> llseek(8, 0, SEEK_CUR)                          = 0
> llseek(8, 0, SEEK_SET)                          = 0
> ioctl(8, (('m'<<8)|7), 0x08045714)              = 0
> ioctl(8, (('m'<<8)|7), 0x08045714)              = 0
> ioctl(8, (('m'<<8)|7), 0x08045714)              = 0
>
> repeated over and over.

Yup, I know. It's a complete p.i.t.a and I logged

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6553206

6553206 bringover from lofs to zfs fails, from lofs to ufs succeeds

to track it. My latest entries in the comments field are:

===================================================================
I haven't had a chance to update my system from snv_62 to 69 or later, but I have just found out the *very* interesting (to me!) piece of information that this bug does not occur when I run the bringover in the global zone to a zfs filesystem - the problem only happens in a non-global zone.

This is the pstack from a bringover in the non-global zone, to the zfs-backed filesystem:

    # pstack 24339
    24339:  bringover -p /ws/onnv-gate -w /scratch/src/build/rfes/mfi usr
     fee2f807 _xstat   (814bd21, 8045a60) + 7
     080b13e7 __1cTavo_path_to_netpath6Fpc0_0_ (81529a0, 8045f0c) + e7
     08083f59 __1cNAvo_workspaceIset_name6Mpc1_pnHAvo_err__ (8151d88, 804796a, 0) + 219
     08083a12 __1cNAvo_workspace2t5B6MpcppnHAvo_err__v_ (8151d88, 804796a, 8047348) + 22
     08088e59 __1cYavo_determine_default_ws6FpcppnNAvo_workspace__pnHAvo_err__ (804796a, 814ec44) + d9
     080773fb __1cNAvo_bringoverKparse_args6M_pnHAvo_err__ (814ec30) + 27a
     0807ce72 __1cPAvo_transactionLtransaction6M_pnHAvo_err__ (814ec30) + 22
     08076de9 __1cNAvo_bringoverHcommand6M_v_ (814ec30) + 39
     08076c58 main     (6, 8047824, 8047840) + c8
     08076afa ???????? (6, 804794c, 8047956, 8047959, 8047967, 804796a)

with the xstat call hanging.

*** (#2 of 3): 2007-09-07 17:00:45 EST james.mcpherson at sun.com

This problem still occurs in snv_77.

To recreate:

* create a non-global zone. zoneroot can be on zfs or ufs
* add a filesystem which is zfs in the global zone, but which is presented using lofs in the non-global zone
* boot the zone
* connect to the zone, via ssh or telnet
* from inside the zone, bringover from the parent to the workspace
* twiddle thumbs

I also tried to narrow down the problem by running the following test *in the global zone*:

* create a test directory on a zfs in the global zone
* cd to that directory
* bringover from the parent workspace (which is either on ufs or zfs)
* watch the files come tumbling across to the test directory

This means that the problem is not immediately with zfs, but rather with the way that zones are handling lofs presentation of zfs.

I also tried one further test - bringing over from the underlying directory that the automounter presents as a /ws workspace. This also failed to produce any output, with the bringover utility stuck looping in the same functions as mentioned earlier in this CR's history.
===================================================================

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp    http://www.jmcp.homeunix.com/blog
James C. McPherson wrote:
> Ian Collins wrote:
> ...
>> I don't know if anything else breaks when you do this, but if you are
>> building software in a zone on a lofs filesystem, dmake hangs. Regular
>> make works fine.
...
> Yup, I know. It's a complete p.i.t.a and I logged
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6553206
>
> 6553206 bringover from lofs to zfs fails, from lofs to ufs succeeds

Ah, so it is a ZFS/zone interaction issue rather than dmake. Interesting that make works, while dmake loops.

Ian
Ian Collins wrote:
> James C. McPherson wrote:
>> Yup, I know. It's a complete p.i.t.a and I logged
>>
>> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6553206
>>
>> 6553206 bringover from lofs to zfs fails, from lofs to ufs succeeds
>>
> Ah, so it is a ZFS/zone interaction issue rather than dmake.
> Interesting that make works, while dmake loops.

So dmake, bringover and ws all result in twiddling of thumbs. It's a royal pain, and unfortunately I don't know enough to drill down further :(

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp    http://www.jmcp.homeunix.com/blog
anilj at entic.net said:
> That link specifically mentions "new Solaris 10 release", so I am assuming
> that means going from, say, u4 to Solaris 10 u5, and that it shouldn't cause
> a problem when doing plain patchadds (without Live Upgrade). If so, then I am
> fine with those warnings and can use zfs for the zones' path.

I don't think you can assume you will have no problems with patchadd. We had a couple of systems running S10U3 (SPARC and x86) with zone roots on ZFS. The normal "smpatch add" or "patchadd" process worked for zones that were running, but had problems with starting halted zones for patching.

The big trouble came with the deferred-activation type of patch, e.g. the large kernel update that added S10U4 features. This deferred-activation patch would not install on our systems with zone roots on ZFS, even when we had all other patches installed (including those for the patch/install utilities).

Shutting down and migrating all zone roots to UFS fixed the problems with patching non-running zones, and with deferred-activation patches.

Regards,
Marion
The question is: if you *temporarily* migrate your zones to UFS to install the big bad S10u4 patch, and migrate back to ZFS afterwards, will patches work after that? A better way to ask it: have we resolved this patching problem with zoneroot on zfs for S10u4?

Tommy