We are having some issues copying the existing data from our Sol 11 snv_70b x4500 to the new Sol 10 5/08 x4500. With all the panics and crashes, we have not yet managed to completely copy even a single filesystem (the ETA for that is about 48 hours).

What are the chances that I could zpool import all the filesystems if I were to simply drop the two mirrored Sol 10 5/08 boot HDDs into this x4500 and reboot? I assume the Sol 10 5/08 zpool version would be newer, so in theory it should work. Comments?

--
Jorgen Lundman     | <lundman at lundman.net>
Unix Administrator | +81 (0)3-5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo  | +81 (0)90-5578-8500 (cell)
Japan              | +81 (0)3-3375-1767 (home)
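PS. In case it isn't clear, the entire plan on the new boot disks would be nothing more than the following (using our pool name; -f because the pool will never have been cleanly exported from the old OS):

  zpool import            # see what the new OS finds on the data disks
  zpool import -f zpool1  # force import of the never-exported pool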
I'd expect that to work personally, although I'd just drop one of your boot mirrors in, myself. That leaves the second drive untouched for your other server. It also means that if it works, you could just wipe the old snv_70b and re-establish the boot mirrors on each server with them.
Wipe the snv_70b disks, I meant.
I am now thinking that it will not work. This is what happened:

  x4500-01# zfs send zpool1/cgi@replication | nc -v x4500-02 3333
  x4500-02# nc -l -p 3333 -vvv | zfs recv -v zpool1/www
  cannot mount 'zpool1/www': Operation not supported
  Mismatched versions: File system is version 2 on-disk format, which is
  incompatible with this software version 1!
  cannot mount 'zpool1/www': Operation not supported

Bluntly, we are screwed. It is rsync, or nothing.

Lund

Ross wrote:
> I'd expect that to work personally, although I'd just drop one of your
> boot mirrors in myself. That leaves the second drive untouched for
> your other server. It also means that if it works you could just wipe
> the old snv_70b and re-establish the boot mirrors on each server with
> them.

--
Jorgen Lundman | <lundman at lundman.net>
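PS. For the record, the rsync fallback looks something like this (paths illustrative; assumes rsync and ssh are available on both Thumpers). The attraction is that re-running the same command after a panic picks up more or less where it left off:

  # copy one subtree at a time; -a preserves perms/times, -H hardlinks,
  # --partial keeps partially transferred files across a crash
  rsync -aH --partial /zpool1/www/ x4500-02:/zpool1/www/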
Ross wrote:
> Wipe the snv_70b disks I meant.

What disks? This message makes no sense without context.

Context-free messages are a pain in the arse for those of us who use the mailing list.

Ian
But zfs send/receive is very different to zpool import. I'm not sure whether zfs send/receive works across different versions of ZFS; I vaguely remember reading something about it not working, but I can't find anything specific right now.

I do think a zpool import after booting from the new drives should work fine, and it doesn't automatically upgrade the pool, so you can still go back to snv_70b if needed. After all, if zpool import did change the version, the zpool upgrade command would be redundant. See the following from the zpool manual:

  zpool import [-d dir] [-D] [-f] [-o opts] [-R root] pool | id [newpool]

    Imports a specific pool. A pool can be identified by its name or the
    numeric identifier. If newpool is specified, the pool is imported
    using the name newpool. Otherwise, it is imported with the same name
    as its exported name.

    If a device is removed from a system without running "zpool export"
    first, the device appears as potentially active. It cannot be
    determined if this was a failed export, or whether the device is
    really in use from another host. To import a pool in this state,
    the -f option is required.

  zpool upgrade

    Displays all pools formatted using a different ZFS on-disk version.
    Older versions can continue to be used, but some features may not
    be available. These pools can be upgraded using "zpool upgrade -a".
    Pools that are formatted with a more recent version are also
    displayed, although these pools will be inaccessible on the system.

Ross

PS. In your first post you said you had no time to copy the filesystem, so why are you trying to use send/receive? Both rsync and send/receive will take a long time to complete.
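PPS. One more thing from that manual excerpt worth knowing: if a pool name ever clashes on import, you can import by numeric id and rename (the id below is made up for illustration):

  zpool import                        # lists each pool's name and numeric id
  zpool import -f 6789123456 zpool2   # import by id, renamed to zpool2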
Ross wrote:
> I do think a zpool import after booting from the new drives should
> work fine, and it doesn't automatically upgrade the pool, so you can
> still go back to snv_70b if needed.

Alas, it would be a downgrade, which is why I think it will fail.

> PS. In your first post you said you had no time to copy the
> filesystem, so why are you trying to use send/receive? Both rsync and
> send/receive will take a long time to complete.

A zfs send of the /zvol/ufs volume would take 2 days, and currently the box panics at least once a day. There appears to be no way to resume a "half transferred" zfs send, so we are rsyncing smaller bits instead. zfs send -i only works if you already have a full copy, which we cannot get, as above.

--
Jorgen Lundman | <lundman at lundman.net>
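PS. To spell out why -i is no use to us: the incremental form needs its base snapshot already received in full on the target, and that full copy is exactly what we cannot get (snapshot names here are illustrative):

  # works only if zpool1/ufs@base already made it to x4500-02 intact
  zfs send -i zpool1/ufs@base zpool1/ufs@today | \
      ssh x4500-02 zfs recv -F zpool1/ufs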
> > so you can still go back to snv_70b if needed.
>
> Alas, it would be downgrade. Which is why I think it will fail.

Not if you don't upgrade the pool it won't. ZFS can import and work with an old version of the filesystem fine. The manual page for zpool upgrade says: "Older versions can continue to be used."

Just import it on Solaris 5/08 without doing the upgrade. Your ZFS pool will be available and can be served out from the new version. If you do find any problems (which I wouldn't expect, to be honest), you can plug your old snv_70b boot disk back in if necessary.

> zfs send of the /zvol/ufs volume would take 2 days. Currently it
> panics at least once a day. There appears to be no way to resume a
> "half transferred" zfs send. So, rsyncing smaller bits.

Aaah, that makes sense now. I don't think you need to do this though; I really think your idea of swapping the boot disks is the best way of getting this server up and running. The absolute worst case is that Solaris 5/08 also crashes on the old Thumper, which would mean you have faulty hardware. If that happens you'll probably need to move your data drives to the new chassis and hope it's not a bad drive causing the fault.

Either way, let me know how you get on.

Ross
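PS. To be explicit about what keeps your rollback option open: nothing changes the on-disk version unless you ask for it.

  # after the import, this only *reports* pools at a different version
  zpool upgrade
  # the one command to hold off on until you're sure you won't go back:
  #   zpool upgrade -a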
Ross wrote:
> Not if you don't upgrade the pool it won't. ZFS can import and work
> with an old version of the filesystem fine. The manual page for zpool
> upgrade says: "Older versions can continue to be used"
>
> Just import it on Solaris 5/08 without doing the upgrade. Your ZFS
> pool will be available and can be served out from the new version. If
> you do find any problems (which I wouldn't expect to be honest), you
> can plug your old snv_70b boot disk in if necessary.

The current (old) server's filesystems are ZFS version 2. The new boot HDDs/OS support only ZFS version 1. I do not think version 1 software will read a version 2 filesystem, and I see nothing anywhere about converting a version 2 down to a version 1.

--
Jorgen Lundman | <lundman at lundman.net>
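PS. One way to see what each side actually supports before deciding (zpool upgrade -v should exist on both; the zfs upgrade subcommand may be absent on older builds):

  zpool upgrade -v   # pool versions this OS can use
  zfs upgrade -v     # filesystem (ZPL) versions, where the command exists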
Sorry Ian, I was posting on the forum and missed the word "disks" from my previous post. I'm still not used to Sun's mutant cross of a message board / mailing list.

Ross

> Date: Fri, 1 Aug 2008 21:08:08 +1200
> From: ian at ianshome.com
> To: myxiplx at hotmail.com
> CC: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Replacing the boot HDDs in x4500
>
> Ross wrote:
> > Wipe the snv_70b disks I meant.
>
> What disks? This message makes no sense without context.
>
> Context free messages are a pain in the arse for those of us who use
> the mail list.
>
> Ian