Chris Mosetick
2010-Sep-10 00:56 UTC
[zfs-discuss] zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected a few; apologies in advance. A couple of questions.

First, I have a physical host (call him bob) that was installed with b134 a few days ago. I upgraded to b145 using the instructions on the Illumos wiki yesterday. The pool has been upgraded (27) and the zfs file systems have been upgraded (5).

chris@bob:~# zpool upgrade rpool
This system is currently running ZFS pool version 27.

Pool 'rpool' is already formatted using the current version.

chris@bob:~# zfs upgrade rpool
7 file systems upgraded

The file systems have been upgraded according to "zfs get version rpool". Looks OK to me. However, I now get an error when I run zdb -D. I can't remember exactly when I turned dedup on, but I moved some data on rpool, and "zpool list" shows a 1.74x ratio.

chris@bob:~# zdb -D rpool
zdb: can't open 'rpool': No such file or directory

Also, running zdb by itself returns the expected output, but it still says my rpool is version 22. Is that expected? I never ran zdb before the upgrade, since it was a clean install from the b134 iso that went straight to b145. One thing I will mention is that the hostname of the machine was changed too (using these instructions: <http://wiki.genunix.org/wiki/index.php/Change_hostname_HOWTO>). bob used to be eric. I don't know if that matters, but I can't open "Users and Groups" from Gnome anymore ("unable to su"), so something is still not right there.

Moving on, I have another fresh install of b134 from iso inside a VirtualBox virtual machine, on a totally different physical machine. This machine is named weston and was upgraded to b145 using the same Illumos wiki instructions. His name has never changed. When I run the same zdb -D command I get the expected output.

chris@weston:~# zdb -D rpool
DDT-sha256-zap-unique: 11 entries, size 558 on disk, 744 in core

dedup = 1.00, compress = 7.51, copies = 1.00, dedup * compress / copies = 7.51

However, after the zpool and zfs upgrades on both machines, zdb still says the rpool is version 22. Is that expected/correct? I added a new virtual disk to the vm weston to see what would happen if I made a new pool on the new disk.

chris@weston:~# zpool create test c5t1d0

Well, the new "test" pool shows version 27, but rpool is still listed at 22 by zdb. Is this expected/correct behavior? See the output below for the rpool and test pool version numbers according to zdb on the host weston.

Can anyone provide any insight into what I'm seeing? Do I need to delete my b134 boot environments for rpool to show as version 27 in zdb? Why does zdb -D rpool give me "can't open" on the host bob?
Thank you in advance,

-Chris

chris@weston:~# zdb
rpool:
    version: 22
    name: 'rpool'
    state: 0
    txg: 7254
    pool_guid: 17616386148370290153
    hostid: 8413798
    hostname: 'weston'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 17616386148370290153
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 14826633751084073618
            path: '/dev/dsk/c5t0d0s0'
            devid: 'id1,sd@SATA_____VBOX_HARDDISK____VBf6ff53d9-49330fdb/a'
            phys_path: '/pci@0,0/pci8086,2829@d/disk@0,0:a'
            whole_disk: 0
            metaslab_array: 23
            metaslab_shift: 28
            ashift: 9
            asize: 32172408832
            is_log: 0
            create_txg: 4
test:
    version: 27
    name: 'test'
    state: 0
    txg: 26
    pool_guid: 13455895622924169480
    hostid: 8413798
    hostname: 'weston'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 13455895622924169480
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 7436238939623596891
            path: '/dev/dsk/c5t1d0s0'
            devid: 'id1,sd@SATA_____VBOX_HARDDISK____VBa371da65-169e72ea/a'
            phys_path: '/pci@0,0/pci8086,2829@d/disk@1,0:a'
            whole_disk: 1
            metaslab_array: 30
            metaslab_shift: 24
            ashift: 9
            asize: 3207856128
            is_log: 0
            create_txg: 4
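For comparison, here is a quick way to look at the different views side by side, assuming c5t0d0s0 is the root disk as in the output above (and assuming I'm right that plain zdb is showing a cached copy of the config rather than reading the disk labels directly):

zpool get version rpool                       # version according to the running pool
zfs get -r version rpool                      # file system versions
zdb -l /dev/dsk/c5t0d0s0 | grep -w version    # version recorded in the vdev labels on disk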
Chris Mosetick
2010-Sep-29 09:44 UTC
[zfs-discuss] [osol-discuss] [illumos-Developer] zpool upgrade and zfs upgrade behavior on b145
Hi Cindy,

I did see your first email pointing to that bug <http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6538600>. Apologies for not addressing it earlier. It is my opinion that the behavior Mike and I <http://illumos.org/issues/217> (or anyone else upgrading pools right now) are seeing is an entirely new and different bug. The bug you point to, originally submitted in 2007, says it manifests itself before a reboot, and you say exporting and importing clears the problem. After several reboots, zdb here still shows the older pool version, which means that either this is a new bug or the bug you are referencing does not clearly and accurately describe the problem and is incomplete.

Suppose an export and import can update the pool label config on a large storage pool; great. How would someone go about exporting the rpool the operating system is on? As far as I know, it's impossible to export the zpool the operating system is running on. I don't think it can be done, but I'm new, so maybe I'm missing something.

One option I have not explored that might work: booting to a live CD that has the same or higher pool version present and then doing

zpool import && zpool import -f rpool && zpool export rpool

and then rebooting into the operating system. Perhaps this might be an option that "works" to update the label config / zdb for rpool, but I think fixing the root problem would be much more beneficial for everyone in the long run.

Being that zdb is a troubleshooting/debugging tool, I would think it needs to be aware of the proper pool version in order to work properly and so admins know what's really going on with their pools. The bottom line here is that if zdb is going to be part of zfs, it needs to display what is currently on disk, including the label config. If I were an admin thinking about trusting hundreds of GBs of data to zfs, I would want the debugger to show me what's really on the disks.

Additionally, even though zpool and zfs "get version" display the true and updated versions, I'm not convinced that the problem is zdb, as the label config is almost certainly set by the zpool and/or zfs commands. Somewhere, something is not happening that is supposed to happen when initiating a zpool upgrade, but since I know virtually nothing of the internals of zfs, I do not know where.

Sincerely,

-Chris
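P.S. To spell out the live CD idea a bit more concretely, the sequence I have in mind would be roughly the following (untested, and the -R alternate root is my own addition, meant to keep the imported rpool from mounting over the live environment):

zpool import                    # list pools available for import; rpool should show up here
zpool import -f -R /mnt rpool   # force the import, since the pool was last in use by another system
zpool export rpool              # clean export, which should rewrite the label config
# then reboot into the installed operating system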
Casper.Dik at Sun.COM
2010-Sep-29 10:03 UTC
[zfs-discuss] [osol-discuss] [illumos-Developer] zpool upgrade and zfs upgrade behavior on b145
> Additionally, even though zpool and zfs "get version" display the true and
> updated versions, I'm not convinced that the problem is zdb, as the label
> config is almost certainly set by the zpool and/or zfs commands. Somewhere,
> something is not happening that is supposed to when initiating a zpool
> upgrade, but since I know virtually nothing of the internals of zfs, I do

The problem is likely in the boot block or in grub. The development version did not update the boot block; newer versions of beadm do fix boot blocks.

For now, I'd recommend you upgrade the boot block on all halves of a bootable mirror before you upgrade the zpool version or the zfs version. export/import won't help.

Casper
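On an x86 system like the ones in this thread, that would look roughly like the following; the disk names are only examples, and a SPARC system would use installboot with the zfs bootblk instead of installgrub:

# reinstall the current GRUB stage1/stage2 on each side of the root mirror
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0

# only after the boot blocks are current, bump the on-disk versions
zpool upgrade rpool
zfs upgrade -r rpool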
Chris Mosetick
2010-Sep-29 10:46 UTC
[zfs-discuss] [osol-discuss] [illumos-Developer] zpool upgrade and zfs upgrade behavior on b145
Well, strangely enough, I just logged into an OS b145 machine. Its rpool is not mirrored, just a single disk. I know that zdb reported zpool version 22 after at least the first three reboots following the rpool upgrade, so I stopped checking. zdb now reports version 27. This machine has probably been rebooted about five or six times since the pool version upgrade. One should not have to reboot six times!

More mystery to this pool upgrade behavior!!

-Chris
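If Casper is right that grub / the boot block is involved, one thing that may be worth checking on a machine like this before drawing conclusions is what is still hanging around from the old build (these are just the standard commands, nothing specific to this problem):

beadm list                 # boot environments still present
bootadm list-menu          # entries in the GRUB menu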