Hi Alex,
Disks that are part of the root pool must contain a valid
slice 0 (this is a boot restriction), and the disk names that you
present to ZFS for the root pool must also specify the slice
identifier (s0). For example, instead of this syntax:
# zpool attach -f rpool c0t0d0 c0t2d0
try this syntax:
# zpool attach rpool c0t0d0s0 c0t2d0s0
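You can confirm that the disk has an SMI label with a valid slice 0
before the attach; printing the label is one way to check (using your
second disk as the example):
# prtvtoc /dev/rdsk/c0t2d0s2
If the disk has an EFI label instead, relabel it with format -e and
recreate slice 0 first.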
Then, apply the boot blocks to c0t2d0s0.
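On a SPARC system like your Netra, that would be something like:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0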
The error you are seeing was a bug in previous releases:
# zpool attach -f rpool c0t0d0 c0t2d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c0t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).
To work around this bug, try this syntax:
# zpool attach -f rpool c0t0d0s0 c0t2d0s0
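After the attach, the new disk will resilver; wait for the resilver to
finish before you test booting from c0t2d0. You can watch the progress
with:
# zpool status rpool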
Thanks,
Cindy
On 01/27/11 19:18, alex bartonek wrote:
> (for some reason I cannot find my original thread... so I'm
> reposting it)
>
> I am trying to move my data off of a 40 GB 3.5" drive to a 40 GB
> 2.5" drive. This is in a Netra running Solaris 10.
>
> Originally what I did was:
>
> zpool attach -f rpool c0t0d0 c0t2d0
>
> Then I did an installboot on c0t2d0s0.
>
> Didn't work. I was not able to boot from my second drive (c0t2d0).
>
> I cannot remember my other commands but I ended up removing c0t2d0 from my
> pool. So here is how it looks now:
>
> # zpool status -v
> pool: rpool
> state: ONLINE
> scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         rpool       ONLINE       0     0     0
>           c0t0d0s0  ONLINE       0     0     0
>
> zfs list shows no other drive connected to the pool.
>
> I am trying to redo this to see where I went wrong but I get the following
> error:
>
> # zpool attach -f rpool c0t0d0 c0t2d0
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/dsk/c0t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
> /dev/dsk/c0t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).
>
>
> How can I remove c0t2d0 from the pool?