I'm using ZFS not to have access to a fail-safe, backed-up system, but to easily manage my file system. I would like to be able to, as I buy new hard drives, simply replace the old ones. I'm very environmentally conscious, so I don't want to leave old drives in there consuming power once they've already been replaced by larger ones. However, ZFS doesn't currently let me detach a non-mirrored device. Is this planned for the future at all? I would imagine something like this:

    zpool detach --non-mirrored dev ...
      detaching non-mirrored dev... wait for data to be copied

or even

    zpool detach --non-mirrored dev ...
      detaching non-mirrored dev... there's not sufficient space to be able to remove dev

etc.

I hope my explanation is clear: obviously the data would have to be copied, possibly to the new drive I've added, since I want to remove the old one.
On Tue, Dec 16, 2008 at 7:10 PM, Daniel <nefar at hotmail.com> wrote:
> I'm using ZFS not to have access to a fail-safe, backed-up system, but to
> easily manage my file system. I would like to be able to, as I buy new
> hard drives, simply replace the old ones. [...]
> However, ZFS doesn't currently let me detach a non-mirrored device. Is this
> planned for the future at all?

vdev evacuation has been talked about, but as far as I'm aware there are still no plans for anyone at Sun to implement it, since none of their large enterprise customers have requested it.

--Tim
tcook,

Thanks for your response. Well, I don't imagine there would be a lot of requests from enterprise customers with deep pockets. My impression has been that OS is targeting the little guy, though, and as such this would really be a welcome feature.
Err... you can't remove drives that are in use, but for what you're describing can't you just use zpool replace and then remove the old drive?
Casper.Dik at Sun.COM
2008-Dec-17 10:07 UTC
[zfs-discuss] zpool detach on non-mirrored drive
> I'm using ZFS not to have access to a fail-safe, backed-up system, but to
> easily manage my file system. I would like to be able to, as I buy new hard
> drives, simply replace the old ones. [...]
> However, ZFS doesn't currently let me detach a non-mirrored device. Is this
> planned for the future at all?

It is unfortunate that you ask this question after you've installed the new disks; now both the old and the new disks are part of the same zpool.

     zpool replace [-f] pool old_device [new_device]

         Replaces old_device with new_device. This is equivalent to attaching
         new_device, waiting for it to resilver, and then detaching old_device.

         The size of new_device must be greater than or equal to the minimum
         size of all the devices in a mirror or raidz configuration.

         new_device is required if the pool is not redundant. If new_device is
         not specified, it defaults to old_device. This form of replacement is
         useful after an existing disk has failed and has been physically
         replaced. In this case, the new disk may have the same /dev/dsk path
         as the old device, even though it is actually a different disk. ZFS
         recognizes this.

So in order to replace an old disk, you'd use:

     zpool replace pool olddisk newdisk

Casper
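For illustration, here is roughly how that looks in practice; the pool and device names below are hypothetical and only meant as a sketch:

     # zpool replace tank c1t2d0 c3t0d0   <- start copying data from the old disk to the new one
     # zpool status tank                  <- watch the resilver progress
     # zpool iostat -v tank 5             <- optional: per-device I/O while the resilver runs

Once zpool status reports the resilver as complete, the old disk is no longer part of the pool and can be powered down and pulled.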
> It is unfortunate that you ask this question after you've installed the
> new disks; now both the old and the new disks are part of the same zpool.

That's awesome, I did not know that this would work. I'm glad I made this post. I actually have not yet replaced any drive; in fact, this very issue had made me not span multiple drives. I installed ZFS to deal with an outdated partition issue, so I have only had the pool span partitions until now. But now that I know I can use this as a solution, I will be less hesitant to add new drives :)
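One caveat on adding drives, sketched here with hypothetical device names: a disk added as a new top-level vdev (zpool add) can later only be replaced, not removed, whereas a disk attached as a mirror (zpool attach) can be detached again.

     # zpool add tank c4t0d0              <- grows the pool; this vdev can only ever be replaced
     # zpool attach tank c1t2d0 c4t0d0    <- mirrors c1t2d0 onto c4t0d0; either side can be detached later

So whether a new drive can ever leave the pool again depends on how it was put in.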
Is it possible to do a replace on / as well?
Cindy.Swearingen at Sun.COM
2008-Dec-18 17:07 UTC
[zfs-discuss] zpool detach on non-mirrored drive
Daniel,

You can replace the disks in both of the supported root pool configurations:

- single disk (non-redundant) root pool
- mirrored (redundant) root pool

I've tried both recently, and I prefer attaching the replacement disk to the single-disk root pool and then detaching the old disk, using the steps below. I like to provide confirmation between various steps. The steps for replacing the root pool disk with zpool replace are included as well; that works fine too, but doesn't provide the safety nets that the first approach does.

Cindy

Before considering either approach, make sure you understand the boot device pathnames of the current and new disk.

Replace root pool disk with zpool attach

1. Attach the new disk:

   # zpool attach rpool old-disk new-disk

   This step creates a mirrored root pool.

2. After the new disk is resilvered, apply the boot blocks to the new disk, like this:

   # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk new-disk

3. Verify that you can boot from the new disk.

4. If the system boots from the new disk, detach the old disk:

   # zpool detach rpool old-disk

5. Set up the system to boot automatically from the new disk, either by using the eeprom command, the setenv command from the SPARC boot PROM, or by configuring the PC BIOS.

Replace root pool disk with zpool replace

1. Replace the root pool disk:

   # zpool replace rpool c1t10d0s0 c4t0d0s0

2. Check the status of the disk replacement:

   # zpool status rpool
     pool: rpool
    state: ONLINE
   status: One or more devices is currently being resilvered. The pool will
           continue to function, possibly in a degraded state.
   action: Wait for the resilver to complete.
    scrub: resilver in progress for 0h0m, 4.73% done, 0h2m to go
   config:

           NAME             STATE     READ WRITE CKSUM
           rpool            ONLINE       0     0     0
             replacing      ONLINE       0     0     0
               c1t10d0s0    ONLINE       0     0     0
               c4t0d0s0     ONLINE       0     0     0

3. When the replacement and resilvering are complete, add the boot blocks to the new disk:

   # zpool status
     pool: rpool
    state: ONLINE
    scrub: resilver completed after 0h4m with 0 errors on Fri Dec 12 10:32:40 2008
   config:

           NAME          STATE     READ WRITE CKSUM
           rpool         ONLINE       0     0     0
             c4t0d0s0    ONLINE       0     0     0

   # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk c4t0d0s0

4. Confirm that you can boot from the new disk.

5. Set up the system to boot automatically from the new disk, either by using the eeprom command, the setenv command from the SPARC boot PROM, or by configuring the x86 BIOS.

Daniel wrote:
> Is it possible to do a replace on / as well?
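To make step 5 a little more concrete, here is a rough sketch of setting the boot device on SPARC; the device path shown is hypothetical and must be replaced with the physical path of your actual new disk (one common way to find it is ls -l /dev/dsk/<new-disk>, which shows the underlying /devices path):

     # eeprom boot-device=/pci@1f,0/ide@d/disk@1,0:a    <- from the running system

     ok setenv boot-device /pci@1f,0/ide@d/disk@1,0:a   <- or from the OBP ok prompt
     ok boot

On x86, the equivalent step is selecting the new disk as the boot device in the BIOS setup.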
Cindy,

This is helpful! Thank you very much :)
I have only one question: why? ZFS is a great filesystem, but why can't we shrink a zpool? This is a very important feature for ZFS adoption in enterprises, is it not? Is it at least a planned feature?

Thanks,
Joel

On Thu, Dec 18, 2008 at 3:45 PM, Daniel <nefar at hotmail.com> wrote:
> Cindy,
>
> This is helpful! Thank you very much :)

--
Joel Cesar Zamboni
joel.zamboni at gmail.com
Actually, I think it's smaller companies and home users who want this most. Sun target enterprises with their storage kit, which tends to be servers that are fully populated with drives. Look at the Thumper, for example - a 4U server that already comes with 48 drives. You're not going to be adding or removing drives from that; it's pretty much a drop-in storage box.

There has been a lot of talk over the last year about making it possible to remove drives, and I believe Sun are working on the code to do this. It sounds like it's quite a complicated thing to do, however, so it's likely to be a good while before it happens. For now it just needs to be accepted as one of the limitations of ZFS. It's easy enough to work around when you know about it.
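For reference, one common workaround is to migrate the data to a freshly created pool built on the drives you want to keep, then destroy the old pool. This is only a rough, hypothetical sketch (pool and device names are made up, and zfs send -R requires a reasonably recent ZFS version):

     # zpool create newtank c5t0d0                           <- new pool on the drives to keep
     # zfs snapshot -r tank@migrate                          <- recursive snapshot of the old pool
     # zfs send -R tank@migrate | zfs receive -F -d newtank  <- copy datasets, properties and snapshots
     # zpool destroy tank                                    <- retire the old pool once the copy is verified

It's not as convenient as an in-place "zpool shrink" would be, but it does get the old drives out of service.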