[b]Short Version[/b]

I used zpool add instead of zpool replace while trying to move drives off an si3124 controller card. I could back the data up to other drives and destroy the pool, but would prefer not to, since it involves around 4 TB of data and will take forever. I ran

[b]zpool add mypool c4t2d0[/b]

instead of

[b]zpool replace mypool c2t1d0 c4t2d0[/b]

No data has been written to the pool since I did this.

[b]Long Version[/b]

I recently set up an OpenSolaris server as a file server in my home. It replaces my previous system, a Windows box with a problematic Perc 5 RAID 5 card. The last time the RAID array on that server went offline (the device returned a Code 10 error under the Windows driver), I broke down and bought the components for its replacement.

Unfortunately, the replacement system I purchased included an Si3124 card, which I did not realize has a driver issue under Solaris. I had another RAID card that works under Solaris but does not support JBOD, so I started creating single-disk RAID 0 volumes and moving the data over one disk at a time. I screwed up with the second drive and issued

[b]zpool add mypool c4t2d0[/b]

when I should have done a replace (what I did the first time). Unfortunately, I cannot figure out how to remove the drive from the pool now. My pool has 8 2 TB drives in raidz2. zpool status returns:

  mypool        DEGRADED     0     0     0
    raidz2-0    DEGRADED     0     0     0
      c2t0d0    ONLINE       0     0     0
      c2t1d0    UNAVAIL      0     0     0  cannot open
      c4t0d0    ONLINE       0     0     0
      c2t3d0    ONLINE       0     0     0
      c3t0d0    ONLINE       0     0     0
      c3t1d0    ONLINE       0     0     0
      c3t3d0    ONLINE       0     0     0
      c3t3d0    ONLINE       0     0     0
    c4t2d0      ONLINE       0     0     0

Am I stuck destroying and recreating the pool? While not the end of the world, it would take a fair amount of time, and I would rather not. This server mainly hosts the video files for a media center PC.
The si3124 pauses every 5 or 10 seconds, which drops throughput from 40-50 MB/s to 2-3 MB/s and interrupts the data stream long enough to make videos unplayable.

-- This message posted from opensolaris.org
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of bear
>
> [b]Short Version[/b]
> I used zpool add instead of zpool replace while trying to move drives
> from an si3124 controller card. I can back up the data to other drives
> and destroy the pool, but would prefer not to, since it involves around
> 4 TB of data and will take forever.
> [b]zpool add mypool c4t2d0[/b]
> instead of
> [b]zpool replace mypool c2t1d0 c4t2d0[/b]

Yeah ... Unfortunately, you cannot remove a vdev from a pool once it's been added.

So, temporarily, in order to get c4t2d0 back under your control for other purposes, you could create a sparse file somewhere and replace the device with the sparse file. This should be very fast, and should not hurt performance, as long as you haven't written any significant amount of data to the pool since adding that device, and won't be writing anything significant until after all is said and done. Don't create the sparse file inside the pool; create it somewhere in rpool, so you don't have a gridlock mount-order problem.

Rather than replacing each device one by one, I might suggest creating a new raidz2 on the new hardware, then using "zfs send | zfs receive" to replicate the contents of the first raid set to the second. Then just destroy (or export, or unmount) the first raid set while changing the mountpoint of the second (and export/import or unmount/mount). Since your data is mostly not changing, the send/receive method should be extremely efficient: you do one send/receive, and you don't even have to follow up with any incrementals later.
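To make the sparse-file step concrete, here is a minimal sketch. The pool and device names (mypool, c4t2d0) come from this thread; the path /rpool/sparse0 is a hypothetical choice; and the dd invocation is a portable stand-in for Solaris's mkfile -n (the demo writes to a temp file so it can be run anywhere):

```shell
# A "2 TB" file that allocates almost no disk space: write zero blocks,
# then push the end-of-file marker out to 2 TB.
# The (Open)Solaris idiom for the same thing:  mkfile -n 2t /rpool/sparse0
f=$(mktemp)                                    # stand-in for /rpool/sparse0
dd if=/dev/zero of="$f" bs=1 count=0 seek=2T 2>/dev/null
ls -l "$f"     # apparent size: 2199023255552 bytes (2 TB)
du -k "$f"     # blocks actually allocated: effectively zero
rm -f "$f"

# On the real system, the swap itself would then be (untested sketch):
#   mkfile -n 2t /rpool/sparse0
#   zpool replace mypool c4t2d0 /rpool/sparse0
```

For the send/receive route, a recursive snapshot lets the whole pool go across in one pass, along the lines of `zfs snapshot -r mypool@migrate` followed by `zfs send -R mypool@migrate | zfs receive -F newpool` (the snapshot and destination pool names are made up here); -R generates a replication stream that preserves descendent datasets and their properties.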