(For some reason I cannot find my original thread, so I'm reposting it.)

I am trying to move my data off of a 40GB 3.5" drive to a 40GB 2.5" drive. This is in a Netra running Solaris 10.

Originally what I did was:

# zpool attach -f rpool c0t0d0 c0t2d0

Then I did an installboot on c0t2d0s0.

It didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember my other commands, but I ended up removing c0t2d0 from the pool. Here is how it looks now:

# zpool status -v
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t0d0s0  ONLINE       0     0     0

zfs list shows no other drive connected to the pool.

I am trying to redo this to see where I went wrong, but I get the following error:

# zpool attach -f rpool c0t0d0 c0t2d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c0t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).

How can I remove c0t2d0 from the pool?
--
This message posted from opensolaris.org
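[Editor's note: not part of the original thread, but for context on the "is part of active ZFS pool" error above: ZFS records pool membership in four 256 KB labels, two at the front of the device and two at the end, and the error persists until those labels are overwritten. A rough, destructive sketch of clearing them with dd, using the device name from this thread (run only if c0t2d0 holds no needed data; SIZE is a placeholder for the disk's sector count from prtvtoc):]

# Wipe the front labels (L0 and L1, first 512 KB):
dd if=/dev/zero of=/dev/rdsk/c0t2d0s2 bs=512 count=2048

# Wipe the tail labels (L2 and L3, last 512 KB). SIZE is the disk size
# in 512-byte sectors, as reported by: prtvtoc /dev/rdsk/c0t2d0s2
# dd if=/dev/zero of=/dev/rdsk/c0t2d0s2 bs=512 oseek=$((SIZE - 2048)) count=2048

After this, re-labeling the disk with format(1M) gives you a clean slice table to attach.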
Hi Alex,

Disks that are part of the root pool must contain a valid slice 0 (this is a boot restriction), and the disk names that you present to ZFS for the root pool must also specify the slice identifier (s0). For example, instead of this syntax:

# zpool attach -f rpool c0t0d0 c0t2d0

try this syntax:

# zpool attach rpool c0t0d0s0 c0t2d0s0

Then, apply the boot blocks to c0t2d0s0.

The error you are seeing was a bug in previous releases:

# zpool attach -f rpool c0t0d0 c0t2d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c0t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).

To work around this bug, try this syntax:

# zpool attach -f rpool c0t0d0s0 c0t2d0s0

Thanks,

Cindy

On 01/27/11 19:18, alex bartonek wrote:
> [original message quoted in full above]
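[Editor's note: putting Cindy's steps together, the full mirror-attach sequence on a SPARC machine such as this Netra looks roughly like the sketch below. The -F zfs form of installboot assumes a Solaris 10 release with ZFS boot support (10/08 or later):]

# Attach the new disk's slice 0 as a mirror of the existing root slice:
zpool attach rpool c0t0d0s0 c0t2d0s0

# Watch the resilver; wait for it to complete before testing boot:
zpool status rpool

# Install the ZFS boot block on the new disk (SPARC):
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t2d0s0

Once the resilver finishes, the machine can be test-booted from the second disk (boot disk1 or the appropriate device alias at the OBP prompt).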
Hey Cindy, I wanted to post here since you've been helping me by email (which I greatly appreciate!). I figured it out. I've done the 'dd' thing before, etc. I got all the way to the point where it was complaining that it cannot use an EFI-labeled drive. When I did a prtvtoc | fmthard on the drive, I was never able to change it to an SMI label. So I went in there, changed the cylinder info, relabeled, changed it back, labeled again... and voila, now I can mirror again!!

Thank you for taking the time to personally email me about my issue.

-Alex
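[Editor's note: for anyone hitting the same wall: root-pool disks must carry an SMI (VTOC) label, not EFI, and format's expert mode can rewrite the label type. A sketch of the relabel-then-copy-VTOC approach Alex describes (interactive prompts vary slightly by release, and copying the VTOC assumes both disks have compatible geometry):]

# Expert mode exposes the label-type choice:
format -e c0t2d0
# At the format> prompt:
#   format> label
#   [0] SMI Label
#   [1] EFI Label
#   Specify Label type[1]: 0

# With the SMI label in place, copy the source disk's slice table:
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2

After this, zpool attach rpool c0t0d0s0 c0t2d0s0 should accept the new slice.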