How can I remove a device or a partition from a pool?
NOTE: The devices are not mirrored or raidz.

Thanks

This message posted from opensolaris.org
On 13/07/06, Yacov Ben-Moshe <Yacov.Ben-Moshe at bcx.co.za> wrote:
> How can I remove a device or a partition from a pool?
> NOTE: The devices are not mirrored or raidz.

Then you can't - there isn't a 'zfs remove' command yet.

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
"Dick Davies" <rasputnik at gmail.com> writes:
> On 13/07/06, Yacov Ben-Moshe <Yacov.Ben-Moshe at bcx.co.za> wrote:
> > How can I remove a device or a partition from a pool?
> > NOTE: The devices are not mirrored or raidz.
>
> Then you can't - there isn't a 'zfs remove' command yet.

Yeah, I ran into that in my testing, too. I suspect it's something
that will come up in testing a LOT more than in real production use.
Although accidentally adding a device to the wrong pool is an
unfixable error at the moment, which is not good.

-- 
David Dyer-Bennet, <mailto:dd-b at dd-b.net>, <http://www.dd-b.net/dd-b/>
RKBA: <http://www.dd-b.net/carry/> Pics: <http://dd-b.lighthunters.net/>
<http://www.dd-b.net/dd-b/SnapshotAlbum/>
Dragaera/Steven Brust: <http://dragaera.info/>
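[For context, the "unfixable error" above takes only one command to make. A sketch with hypothetical pool and device names (requires root on a system with ZFS, so it is illustrative only):

```shell
# Hypothetical names: 'tank' is an existing pool, c5t0d0 a spare disk.
# One mistyped pool name is enough:
zpool add tank c5t0d0     # oops - this disk was meant for another pool

# The disk is now a permanent top-level vdev of 'tank'. With no
# 'zfs remove' command, the only way back is to back up the data,
# destroy the pool, and recreate it without the disk.
zpool status tank         # shows the accidental vdev in the config
```
]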
> Yeah, I ran into that in my testing, too. I suspect it's something
> that will come up in testing a LOT more than in real production use.

I disagree. I can see lots of situations where you want to attach new
storage and remove or retire old storage from an existing pool. It would
be great if ZFS could accept a "remove <vdev>" command, migrate any
existing data off that vdev onto the rest of the pool, and remove the
vdev from the pool completely.

Imagine an aging disk shelf that you're using as your zpool... it's
about to croak, so you buy a replacement, add it to the system, and add
the new shelf to the pool. If you can simply remove the old shelf's
vdev, you have practically no downtime. (Or truly no downtime, if you
can hot-attach the new storage.) If instead you must freeze the
filesystems, dump/restore them to a separate location, and move the new
pool to the old location, it's a significantly more disruptive event.

This would also allow you to fix mistakes like creating a vdev with the
wrong devices. In my case, I made a typo and created a raidz with mixed
disk sizes instead of using all the same size disks... I'm now stuck
with using half the capacity of the disks in that vdev unless I
completely destroy the pool, and maybe I have people ticked off at me
because I've just wasted half their money and I can't afford to blow
away the zpool and start over again.

You *can* replace individual devices in the vdev, but I haven't tested
whether or not the raidz grows to use the full size of the disks once I
replace all the 36 GB drives with 73 GB drives. I suspect not.
Fortunately, this is not a production system, so I can nuke it, but the
flexibility to remove vdevs without nuking the whole zpool would be a
Very Good Thing.

BP
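[The replace-one-at-a-time workaround BP mentions would look roughly like this. All device names are made up, and the commands need root on a ZFS system, so treat this as a sketch rather than a recipe:

```shell
# Hypothetical: 'tank' contains a raidz vdev of three drives, one of
# which (c2t0d0) is a 36 GB disk added by mistake among 73 GB disks.
# There is no 'zpool remove' for top-level vdevs, but each member disk
# can be swapped out in place:
zpool replace tank c2t0d0 c3t0d0   # resilver onto a 73 GB drive
zpool status tank                  # wait for resilvering to finish
zpool replace tank c1t0d0 c3t1d0   # then the remaining members,
zpool replace tank c1t1d0 c3t2d0   # one at a time

# Only once *every* member of the raidz has been replaced can the
# vdev possibly grow to the larger disks' capacity.
zpool list tank
```
]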
On Fri, Aug 04, 2006 at 10:29:49AM -0700, Brad Plecs wrote:
> You *can* replace individual devices in the vdev, but I haven't tested
> whether or not the raidz grows to use the full size of the disks once
> I replace all the 36 GB drives with 73 GB drives. I suspect not.

Actually, it does. I tested this (with files, not disks, but it
shouldn't be any different). As long as you make sure all the disks in
the... whatever you would call it, container? (as in you can have more
than one raidz container in a pool) are replaced, the size will expand.
I think you may need to force a scrub for it to see things, but it does
work.

-brian
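[Brian's file-backed experiment can be reproduced along these lines. File and pool names here are invented, and it still requires root on a host with ZFS, so this is a sketch of the procedure rather than something to paste blindly:

```shell
# Create small and large backing files to stand in for disks.
mkfile 64m /var/tmp/small0 /var/tmp/small1 /var/tmp/small2
mkfile 128m /var/tmp/big0 /var/tmp/big1 /var/tmp/big2

# Build a raidz pool on the small files and note its size.
zpool create testpool raidz /var/tmp/small0 /var/tmp/small1 /var/tmp/small2
zpool list testpool

# Swap each backing file for a larger one, one at a time,
# letting each resilver complete before the next replace.
zpool replace testpool /var/tmp/small0 /var/tmp/big0
zpool replace testpool /var/tmp/small1 /var/tmp/big1
zpool replace testpool /var/tmp/small2 /var/tmp/big2

# A scrub may be needed before the new capacity is reported.
zpool scrub testpool
zpool list testpool          # size should now reflect the 128m files

# Clean up.
zpool destroy testpool
rm /var/tmp/small? /var/tmp/big?
```
]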
Hi there

Has any consideration been given to this feature...? I would also agree
that this will not only be a "testing" feature, but will find its way
into production.

It would probably work on the same principle as swap -a and swap -d ;)
Just a little bit more complex.
On Fri, Aug 11, 2006 at 02:47:19AM -0700, Louwtjie Burger wrote:
> Has any consideration been given to this feature...?

Yes, this is on our radar. We have some ideas about how to implement it,
but it will probably be at least 6 months until it is ready. We have
several higher-priority tasks to finish before then (eg. continuing to
improve performance, boot and install off zfs).

> It would probably work on the same principle as swap -a and swap -d ;)
> Just a little bit more complex.

Just a bit ;-)

--matt