Cameron Jones
2009-Oct-16 00:36 UTC
[zfs-discuss] Mount ZFS on Dual Boot Machine (open)solaris
Hi,

Sorry if this is a dumb question, but I can't seem to find an answer close to what I'm trying to figure out.

I have a dual-boot machine with 3 disks:

- 2 x 1TB mirror: OpenSolaris with ZFS root pool
- 1 x 250GB: Solaris 10 with ZFS root pool

This is a development/testing/home file server machine only.

My question is, though: since I can boot into either OpenSolaris or Solaris (but not both at the same time, obviously :) I'd like to be able to mount the other disks into whatever host OS I boot into.

Is this possible & recommended? Is there any scope for inconsistency if, say, I upgrade OpenSolaris with new ZFS versions but continue mounting a mirror in Solaris with old versions? I guess I'm wondering whether doing this could corrupt one or the other?

Many thanks,
cam
--
This message posted from opensolaris.org
Frank Middleton
2009-Oct-16 01:07 UTC
[zfs-discuss] Mount ZFS on Dual Boot Machine (open)solaris
On 10/15/09 20:36, Cameron Jones wrote:

> My question is, though: since I can boot into either OpenSolaris or
> Solaris (but not both at the same time, obviously :) I'd like to be
> able to mount the other disks into whatever host OS I boot into.
> Is this possible & recommended?

Definitely possible. Where do you keep your user data? It isn't clear that there is much utility in cross-mounting rpools between Solaris/sxce and OpenSolaris; better to keep your user data in one or more separate data pools and just mount them. That simplifies backups, too.

> Is there any scope for inconsistency if, say, I upgrade OpenSolaris
> with new ZFS versions but continue mounting a mirror in Solaris with
> old versions?

You have to watch out for the gratuitous update-archive problem at reboot (http://defect.opensolaris.org/bz/show_bug.cgi?id=11358). Otherwise, AFAIK you just have to be careful. So far ZFS seems to have kept backwards compatibility. Just don't accidentally do a zpool upgrade :-). Because of 11358, I would not recommend cross-mounting the rpools. But it isn't clear that that is what you really want to achieve...
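Concretely, with a separate data pool the handoff between boots is just a clean export before reboot and an import after. A sketch only -- the pool name "tank" is made up, and every command is echoed rather than executed so you can preview it safely:

```shell
#!/bin/sh
# Dry-run sketch of handing a shared data pool between the two boot
# environments. "tank" is a made-up pool name; set RUN= (empty) to
# execute the commands for real instead of just printing them.
RUN=echo

# Before rebooting into the other OS, export the pool cleanly:
$RUN zpool export tank

# After booting the other OS, import it. With no arguments,
# "zpool import" only lists pools available for import:
$RUN zpool import
$RUN zpool import tank

# If the pool was NOT exported before the switch, the import is
# refused as "potentially active". On a dual-boot box the other OS
# cannot actually be running, so forcing is safe in that case:
$RUN zpool import -f tank

# "zpool upgrade" with no pool argument only reports on-disk versions;
# it never changes anything, so it is safe to run from the older OS:
$RUN zpool upgrade
```

The dry-run prefix is just a safety habit for sketches like this; the point is that only "zpool upgrade <pool>" changes the on-disk format, so checking versions costs nothing.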
Cameron Jones
2009-Oct-16 03:31 UTC
[zfs-discuss] Mount ZFS on Dual Boot Machine (open)solaris
Thanks!

By cross-mounting, do you mean mounting the drives on 2 running OSs? That wasn't really what I was looking for, but nice to know the option is there, even though it's not recommended!

My only real aim was to have the 3 disks accessible when booting into either OS so I could share archived data between them. The Solaris instance is only for testing, so I haven't got any real user data in there; it's just for messing around and trying things out on a real box before applying them in production. The OpenSolaris instance is what I'm developing in and using for archiving, backups etc., so I'd have it running most of the time.

It sounds like I shouldn't have any problem cold-cross-mounting :) Although, does bug 11358 only apply to OpenSolaris, or could it apply to Solaris 10 too?

Also, I thought I read in the docs that ZFS assigns an ID to each drive which is unique to the OS -- if I mount it into another OS, would this ID keep changing each time I switch?
Frank Middleton
2009-Oct-16 13:29 UTC
[zfs-discuss] Mount ZFS on Dual Boot Machine (open)solaris
On 10/15/09 23:31, Cameron Jones wrote:

> By cross-mounting, do you mean mounting the drives on 2 running OSs?
> That wasn't really what I was looking for, but nice to know the option
> is there, even though it's not recommended!

No, since you really can't run two OSs at the same time unless you use zones. Maybe someone more expert than I could comment on the idea of running OpenSolaris on a Solaris 10 or sxce host - e.g., in the case of sxce, if they were both, say, snv124?

> My only real aim was to have the 3 disks accessible when booting into
> either OS so I could share archived data between them.

That's what you should do (and I do it all the time). Put your user data in a separate pool and import only that on both OS instances. So in your case, install OpenSolaris in a 32GB or larger slice 0 partition of the mirror and put /export on (say) slice 1. My data pool is called "space" and it has a number of file systems, most of which are mounted on /export (e.g., /export/home/userz for user "userz"). You could do this by a zfs snap of the OpenSolaris rpool from Solaris, and then a zfs recv after running format (follow the guide for restoring a ZFS rpool at http://docs.sun.com/app/docs/doc/819-5461/ghzur?a=view).

> It sounds like I shouldn't have any problem cold-cross-mounting :)
> Although, does bug 11358 only apply to OpenSolaris, or could it apply
> to Solaris 10 too?

Not sure. sxce and OpenSolaris both do the dreaded archive update, so AFAIK Solaris 10 would do it too, possibly with bad consequences. A workaround would be to make sure the other rpool is not mounted when you reboot, but one whoops and you might be toast. Better to keep data and OS separate. Then you can do zfs snaps for rpool backups and something different, if you like, for user data backups.

> Also, I thought I read in the docs that ZFS assigns an ID to each
> drive which is unique to the OS - if I mount it into another OS,
> would this ID keep changing each time I switch?

AFAIK it doesn't.
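To make that concrete, the setup might look something like the following. This is a sketch only -- the disk slices (c0t0d0s1, c0t1d0s1) and the user name are made up, and everything is echoed as a dry run so nothing executes by accident:

```shell
#!/bin/sh
# Dry-run sketch of the separate-data-pool layout described above.
# Device names, pool name and user name are made up; set RUN= (empty)
# to execute the commands for real.
RUN=echo

# Mirror slice 1 of the two 1TB disks into a data pool:
$RUN zpool create space mirror c0t0d0s1 c0t1d0s1

# File systems under /export; children inherit the mountpoint, so
# space/export/home/userz ends up mounted at /export/home/userz:
$RUN zfs create -o mountpoint=/export space/export
$RUN zfs create space/export/home
$RUN zfs create space/export/home/userz

# rpool backup into the data pool: recursive snapshot, then send the
# stream somewhere safe (restore per the docs.sun.com guide above):
$RUN zfs snapshot -r rpool@backup
# then: zfs send -R rpool@backup > /export/backups/rpool.zfs
```

Either OS can then import "space" and see the same /export, while each keeps its own untouched rpool.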
I have sxce and OpenSolaris running alternately on one host, and they mount the data pool with no problems at all. I no longer even try to cross-mount the rpools because my OpenSolaris installs kept getting trashed by 11358, but at that time sxce was on UFS. I believe the IDs are assigned when the pool is created, so if you zfs recv an rpool from another host with an otherwise identical configuration, it will try (and correctly fail) to mount a zombie data pool when you boot it. I assume the ID is ignored on the root pool at boot time or it wouldn't be able to boot at all. Undoubtedly a guru will chip in here if this is incorrect :-)

HTH -- Frank
Frank Middleton
2009-Oct-16 13:53 UTC
[zfs-discuss] Mount ZFS on Dual Boot Machine (open)solaris
On 10/16/09 09:29, I wrote:

> I assume the ID is ignored on the root pool at boot time or it
> wouldn't be able to boot at all. Undoubtedly a guru will chip in here
> if this is incorrect :-)

Of course this was hogwash. You create the pool before receiving the snapshot, so the ID is local. One of the many nice things about ZFS is that it is so logically consistent. I'd never want to go back!

Cheers -- Frank