Is RFE 4852783 (the need for an equivalent to LVM2's pvmove) likely to happen within the next year?

My use-case is home user. I have 16 disks spinning, two towers of eight disks each, exporting some of them as iSCSI targets. Four disks are 1TB disks already in ZFS mirrors, and 12 disks are 180-320GB and contain 12 individual filesystems.

If RFE 4852783 happens within a year, I can move the smaller disks and their data into the ZFS mirror. As they die I will replace them with pairs of ~1TB disks.

I worry the RFE won't happen, because it looks five years old with no posted ETA. If it won't be closed within a year, some of those 12 disks will start failing and need replacement; we find we lose one or two each year. If I added them to ZFS, I'd have to either waste money, space, and power buying undersized replacement disks, or else do silly and dangerously confusing things with slices. In that case I will leave the smaller disks out of ZFS and add only 1TB devices to these immutable vdevs.
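For anyone who hasn't used the LVM2 feature I mean, the workflow the RFE asks ZFS to match looks roughly like this (a minimal sketch; the volume group vg0 and the physical volume /dev/sdb1 are placeholder names, not part of my setup):

    # drain every allocated extent off the outgoing physical volume onto
    # whatever free space the remaining PVs in the volume group provide
    pvmove /dev/sdb1

    # once it is empty, drop it from the volume group and clear its label
    vgreduce vg0 /dev/sdb1
    pvremove /dev/sdb1

It is that drain-then-detach step, applied to a top-level vdev, that ZFS currently has no equivalent for.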
This is actually quite a tricky fix, as both data and metadata have to be relocated. Although there's been no visible activity in this bug, there has been substantial design activity to allow the RFE to be fixed easily. Anyway, to answer your question: I would fully expect this RFE to be fixed within a year, but I can't guarantee it.

Neil.

Miles Nordin wrote:
> Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
> happen within the next year?
> [...]
Why would you have to buy smaller disks? You can replace the 320s with 1TB drives, and after the last 320 is out of the raid group it will grow automatically. (A rough command sequence is sketched below the quoted text.)

On 6/16/08, Miles Nordin <carton at ivy.net> wrote:
> Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
> happen within the next year?
> [...]
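Roughly, with placeholder pool and disk names (not anyone's real devices), and one replace at a time so each resilver can finish:

    # swap one small disk of a mirror for a 1TB drive and let it resilver
    zpool replace tank c1t0d0 c1t8d0

    # when that resilver completes, swap the other side the same way
    zpool replace tank c1t1d0 c1t9d0

    # once no small disk is left in the vdev it can grow to the larger
    # size; depending on the build, an export/import may be needed
    # before the extra space shows up
    zpool export tank
    zpool import tank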
>>>>> "t" == Tim <tim at tcsac.net> writes:

     t> Why would you have to buy smaller disks? You can replace the
     t> 320s with 1TB drives and after the last 320 is out of the
     t> raid group, it will grow automatically.

This does work for me to grow a mirrored vdev on nevada b71. The way I found to view the size of an individual vdev was through 'zpool iostat -v':

before:

-----8<-----
terabithia:/# zpool iostat -v andaman 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
andaman      2.23T   928G      0      0      0      0
  mirror      926G  2.07G      0      0      0      0
    c3t11d0      -      -      0      0      0      0
    c3t9d0       -      -      0      0      0      0
  mirror      681G   247G      0      0      0      0
    c3t14d0      -      -      0      0      0      0
    c3t8d0       -      -      0      0      0      0
  mirror      231G   601M      0      0      0      0
    c3t28d0      -      -      0      0      0      0
    c3t26d0      -      -      0      0      0      0
  mirror      231G   540M      0      0      0      0
    c3t29d0      -      -      0      0      0      0
    c3t15d0      -      -      0      0      0      0
  mirror      109G   589G      0      0      0      0
    c3t18d0      -      -      0      0      0      0
    c3t13d0      -      -      0      0      0      0
  mirror      100G  89.0G      0      0      0      0
    c3t25d0      -      -      0      0      0      0
    c3t17d0      -      -      0      0      0      0
-----------  -----  -----  -----  -----  -----  -----
-----8<-----

terabithia:/# zpool replace andaman c3t25d0 c3t30d0

after resilver:

-----8<-----
terabithia:/# zpool iostat -v andaman 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
andaman      2.23T   928G      0      0      0      0
  mirror      926G  2.07G      0      0      0      0
    c3t11d0      -      -      0      0      0      0
    c3t9d0       -      -      0      0      0      0
  mirror      681G   247G      0      0      0      0
    c3t14d0      -      -      0      0      0      0
    c3t8d0       -      -      0      0      0      0
  mirror      231G   601M      0      0      0      0
    c3t28d0      -      -      0      0      0      0
    c3t26d0      -      -      0      0      0      0
  mirror      231G   539M      0      0      0      0
    c3t29d0      -      -      0      0      0      0
    c3t15d0      -      -      0      0      0      0
  mirror      109G   589G      0      0      0      0
    c3t18d0      -      -      0      0      0      0
    c3t13d0      -      -      0      0      0      0
  mirror      100G  89.0G      0      0      0      0
    c3t30d0      -      -      0      0      0      0
    c3t17d0      -      -      0      0      0      0
-----------  -----  -----  -----  -----  -----  -----
-----8<-----

terabithia:/# zpool export andaman
terabithia:/# zpool import andaman

after export/import:

-----8<-----
terabithia:/# zpool iostat -v andaman 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
andaman      2.23T   971G      0      0      0      0
  mirror      926G  2.07G      0      0      0      0
    c3t11d0      -      -      0      0      0      0
    c3t9d0       -      -      0      0      0      0
  mirror      681G   247G      0      0      0      0
    c3t14d0      -      -      0      0      0      0
    c3t8d0       -      -      0      0      0      0
  mirror      231G   601M      0      0      0      0
    c3t28d0      -      -      0      0      0      0
    c3t26d0      -      -      0      0      0      0
  mirror      231G   539M      0      0      0      0
    c3t29d0      -      -      0      0      0      0
    c3t15d0      -      -      0      0      0      0
  mirror      109G   589G      0      0      0      0
    c3t18d0      -      -      0      0      0      0
    c3t13d0      -      -      0      0      0      0
  mirror      100G   132G      0      0      0      0
    c3t30d0      -      -      0      0      0      0
    c3t17d0      -      -      0      0      0      0
-----------  -----  -----  -----  -----  -----  -----
-----8<-----
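One note for anyone repeating this: make sure the resilver has actually finished before the export/import. 'zpool status' shows when it is done (same pool name as above):

    # while the replace is running, the output shows a 'replacing' vdev
    # with progress information; once it reports the resilver complete
    # it is safe to export and re-import
    terabithia:/# zpool status -v andaman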