I would like to clone the configuration on a v210 with snv_115. The current pool looks like this:

-bash-3.2$ /usr/sbin/zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

After I run zpool detach rpool c1t1d0s0, how can I remount c1t1d0s0 to /tmp/a so that I can make the changes I need prior to removing the drive and putting it into the new v210?

I suppose I could lucreate -n new_v210, lumount new_v210, edit what I need to, luumount new_v210, luactivate new_v210, zpool detach rpool c1t1d0s0, and then luactivate the original boot environment.
-- 
This message posted from opensolaris.org
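Spelled out as commands, the Live Upgrade sequence described in the last paragraph would look roughly like the following. This is a sketch only: the BE name new_v210 and the /tmp/a mount point are taken from the post above, the original BE name is whatever lustatus reports, and the exact flags should be checked against the lucreate(1M)/lumount(1M) man pages.

# lucreate -n new_v210
# lumount new_v210 /tmp/a
  (edit what is needed under /tmp/a)
# luumount new_v210
# luactivate new_v210
# zpool detach rpool c1t1d0s0
# luactivate <original-BE-name>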
Hi Karl,

Manually cloning the root pool is difficult. We have a root pool recovery procedure that you might be able to apply as long as the systems are identical. I would not attempt this with LiveUpgrade and manual tweaking.

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery

The problem is that the amount of system-specific info stored in the root pool, and any kind of device differences, might be insurmountable.

Solaris 10 ZFS/flash archive support is available with patches but not for the Nevada release.

The ZFS team is working on a split-mirrored-pool feature and that might be an option for future root pool cloning.

If you're still interested in a manual process, see the steps below attempted by another community member who moved his root pool to a larger disk on the same system.

This is probably more than you wanted to know...

Cindy

# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool@$SNAPNAME
# zfs list -t snapshot
# zfs send -R rpool@$SNAPNAME | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
For x86, install GRUB instead:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Set the bootfs property on the root pool BE.
# zpool set bootfs=altrpool/ROOT/zfsBE altrpool
# zpool export altrpool
# init 5
Remove the source disk (c1t0d0s0) and move the target disk (c1t1d0s0) to slot 0, then insert the Solaris 10 DVD.
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

On 09/24/09 10:06, Karl Rossing wrote:
> I would like to clone the configuration on a v210 with snv_115.
> [...]
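If you try those steps, it may be worth sanity-checking the copy before the export. Both commands below are read-only checks, and the dataset names assume the layout shown in the steps above:

# zfs list -r altrpool
# zpool get bootfs altrpool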
Hi Cindy,

Could you provide a list of the system-specific info stored in the root pool?

Thanks

Peter

2009/9/24 Cindy Swearingen <Cindy.Swearingen at sun.com>:
> Hi Karl,
>
> Manually cloning the root pool is difficult. We have a root pool recovery
> procedure that you might be able to apply as long as the systems are identical.
> [...]
As Cindy said, this isn't trivial right now. Personally, I'd do it this way:

ASSUMPTIONS:
* Both v210 machines are reasonably identical (they may differ in RAM or CPU speed, but nothing much else).
* Call the original machine A and the new machine B.
* Machine B has no drives in it yet.

METHOD:
1) On A, install the boot block on c1t1 as Cindy detailed below (installboot ...).
2) Shut down A.
3) Remove c1t0 from A (that is, the original boot drive).
4) Boot A from c1t1 (you will likely have to do this at the boot PROM, via something like 'boot disk2').
5) Once A is back up, make the changes you need to make A look like what B should be. Note that ZFS will mark c1t0 as failed.
6) Shut down A, remove c1t1, and move it to B, putting it in the c1t1 disk slot (i.e. the 2nd slot).
7) Boot B in the same manner you did A a minute ago (boot disk2).
8) When B is up, insert a new drive into the c1t0 slot and do a 'zpool replace rpool c1t0d0 c1t0d0'.
9) After the resilver completes, do an 'installboot' on c1t0.
10) Reboot B, and everything should be set.
11) On A, re-insert the original c1t0 into its standard place (i.e. it should remain c1t0).
12) Boot A.
13) Insert a fresh drive into the c1t1 slot.
14) zpool replace rpool c1t1d0 c1t1d0
15) installboot after the resilver

Note that I've not specifically tried the above, but I can't see any reason why it shouldn't work.

-Erik

Cindy Swearingen wrote:
> Hi Karl,
>
> Manually cloning the root pool is difficult. We have a root pool
> recovery procedure that you might be able to apply as long as the
> systems are identical.
> [...]

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
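For steps 8-9 (and again for 14-15), the replace/resilver/bootblock part would look something like the following on SPARC. This is a sketch only, using the s0 slice names that actually appear in the pool above rather than the whole-disk names; zpool status is simply how to watch for the resilver to finish:

# zpool replace rpool c1t0d0s0 c1t0d0s0
# zpool status rpool
  (wait until the resilver has completed)
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0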
Hi Peter,

I can't provide it because I don't know what it is.

Even if we could provide a list of items, tweaking the device information if the systems are not identical would be too difficult.

cs

On 09/24/09 12:04, Peter Pickford wrote:
> Hi Cindy,
>
> Could you provide a list of the system-specific info stored in the root pool?
> [...]
Thanks for the help.

Since the v210s in question are at a remote site, it might be a bit of a pain getting the drives swapped by end users.

So I thought of something else. Could I netboot the new v210 with snv_115, use zfs send/receive over ssh to grab the data from the old server, install the boot block, import the pool, make the changes I need, and reboot the system?
-- 
This message posted from opensolaris.org
Karl,

I'm not sure I'm following everything. If you can't swap the drives, then which pool would you import?

If you install the new v210 with snv_115, then you would have a bootable root pool. You could then receive the snapshots from the old root pool into the root pool on the new v210.

I would practice the snapshot/send/recv'ing process if you are not familiar with it before you attempt the migration.

Cindy

On 09/24/09 12:39, Karl Rossing wrote:
> Thanks for the help.
>
> Since the v210s in question are at a remote site, it might be a bit of a
> pain getting the drives swapped by end users.
> [...]
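A rough sketch of the send/receive leg of that plan, pulled from the netbooted environment Karl described: the hostnames and snapshot name below are placeholders, and it assumes the netboot environment has a usable ssh client (which may not be true of a stock miniroot, in which case the stream could be pushed the other way or staged through a file).

On the old v210:
# zfs snapshot -r rpool@migrate

From the netbooted environment on the new v210, after creating rpool on its disk:
# ssh old-v210 /usr/sbin/zfs send -R rpool@migrate | zfs recv -vFd rpool

After the receive, the boot block and the bootfs property still need to be set, as in the steps Cindy posted earlier.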
Hi Cindy,

Wouldn't

touch /reconfigure
mv /etc/path_to_inst* /var/tmp/

regenerate all the device information? AFAIK zfs doesn't care about the device names (it scans for them), so it would only affect things like vfstab.

I did a restore from an E2900 to a V890 and it seemed to work: I created the pool and did a zfs receive.

I would like to be able to take a zfs send of a minimal build, install it in an ABE, and activate it. I tried that in test and it seems to work, but I'm just wondering what I may have missed.

I saw someone else has done this on the list and was going to write a blog.

It seems like a good way to get a minimal install onto a server with reduced downtime.

Now if I just knew how to run the installer in an ABE without there being an OS there already, that would be cool too.

Thanks

Peter

2009/9/24 Cindy Swearingen <Cindy.Swearingen at sun.com>:
> Hi Peter,
>
> I can't provide it because I don't know what it is.
>
> Even if we could provide a list of items, tweaking the device information
> if the systems are not identical would be too difficult.
> [...]
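Applied to a received root that is mounted on an alternate mount point before its first boot on the new hardware, those two commands would look roughly like this. The /a mount point is only an example (e.g. after a zpool import -R /a from the miniroot and mounting the BE dataset by hand):

# touch /a/reconfigure
# mv /a/etc/path_to_inst* /a/var/tmp/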
On 09/24/09 15:54, Peter Pickford wrote:
> Hi Cindy,
>
> Wouldn't
>
> touch /reconfigure
> mv /etc/path_to_inst* /var/tmp/
>
> regenerate all the device information?

It might, but it's hard to say whether that would accomplish everything needed to move a root file system from one system to another.

I just got done modifying flash archive support to work with zfs root on Solaris 10 Update 8. For those not familiar with it, "flash archives" are a way to clone full boot environments across multiple machines. The S10 Solaris installer knows how to install one of these flash archives on a system and then do all the customizations to adapt it to the local hardware and local network environment. I'm pretty sure there's more to the customization than just a device reconfiguration.

So feel free to hack together your own solution. It might work for you, but don't assume that you've come up with a completely general way to clone root pools.

lori

> AFAIK zfs doesn't care about the device names (it scans for them), so it
> would only affect things like vfstab.
> [...]
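For anyone who wants to try the flash route on s10u8, creating the archive is the usual flarcreate invocation. A minimal sketch, with the archive name and NFS path as examples only (flarcreate(1M) documents the options, including dataset exclusion):

# flarcreate -n zfsBE-s10u8 /net/archive-server/export/flar/zfsBE-s10u8.flar

The archive would then be installed through JumpStart with a profile using install_type flash_install and an archive_location pointing at the flar.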
Hi Lori,

Is the u8 flash support for the whole root pool or for an individual BE using live upgrade?

Thanks

Peter

2009/9/24 Lori Alt <Lori.Alt at sun.com>:
> It might, but it's hard to say whether that would accomplish everything
> needed to move a root file system from one system to another.
>
> I just got done modifying flash archive support to work with zfs root on
> Solaris 10 Update 8.
> [...]
The whole pool. Although you can choose to exclude individual datasets from the flar when creating it.

lori

On 09/25/09 12:03, Peter Pickford wrote:
> Hi Lori,
>
> Is the u8 flash support for the whole root pool or for an individual BE
> using live upgrade?
> [...]