Hi,
to keep the story short, here is the situation. I have 4 disks in a ZFS/SVM
config:
c2t9d0 9G
c2t10d0 9G
c2t11d0 18G
c2t12d0 18G
c2t11d0 is divided into two slices:
selecting c2t11d0
[disk formatted]
/dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M).
/dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M).
/dev/dsk/c2t11d0s2 is in use by zpool storedge. Please see zpool(1M).
/dev/dsk/c2t11d0s7 contains an SVM mdb. Please see metadb(1M).
format> partition
partition> print
Current partition table (original):
Total disk cylinders available: 7506 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
  0       home    wm       0 - 3783        8.50GB    (3784/0/0) 17830208
  1       home    wm    3784 - 7499        8.35GB    (3716/0/0) 17509792
  2     backup    wu       0 - 7505       16.86GB    (7506/0/0) 35368272
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 alternates    wm    7500 - 7505       13.80MB       (6/0/0)    28272
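(I did not capture it at the time, but to double-check that the backup slice s2
really overlaps s0 and s1, the same layout could also be dumped non-interactively;
this is just the command I would use, not something I ran back then:)
# prtvtoc /dev/rdsk/c2t11d0s2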
This ZFS/SVM config was set up on one controller; afterwards the external StorEdge
was attached to a new controller. Everything ran fine until I decided to
export/import the zpool to get the correct device names into the zpool config. After that,
zpool decided to use c2t11d0s2 instead of c2t11d0s0! s2 is the backup slice that
overlaps the whole disk, and on c2t11d0s1 sits one half of the SVM mirror of my /export/home!
I have now exported the zpool to make sure I don't get any data loss. After the
export/import I did not write to the zpool, so I hope my SVM /export/home is
still clean, but I am not sure.
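To find out which slices ZFS has actually put labels on, I intend to run zdb -l
against each slice of c2t11d0 while the pool stays exported (zdb -l reads the
labels straight from the device, so no import should be needed). This is only a
sketch of what I plan to check, not something I have run yet:
# zdb -l /dev/dsk/c2t11d0s0
# zdb -l /dev/dsk/c2t11d0s1
# zdb -l /dev/dsk/c2t11d0s2
If a slice is (or was) a pool member, zdb -l should print the pool labels for it;
that would at least show which slice the imported pool got bound to.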
I did not notice this issue right away. I was experimenting with replacing
devices of different sizes in a zpool and rereading the man pages in between, so
I don't have the complete output. Maybe someone can make something of this
and tell me how I can make sure my /export/home mirror is in the best possible
state.
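For the SVM side, this is roughly what I plan to run to judge the state of the
/export/home mirror before trusting it again; it is only a sketch, and d10 as the
name of the mirror metadevice is my assumption (only the stripe d11 shows up in
the format output above):
# metadb -i
# metastat
# fsck -n /dev/md/rdsk/d10
metadb -i lists the state database replicas, metastat should show whether any
submirror is in "Needs maintenance" state, and the read-only fsck on the mirror's
raw device (ideally with /export/home unmounted) would tell me whether the
filesystem itself is still consistent.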
Here is the (partially incomplete) output of what I had, what I did, and what I
have now:
# cfgadm -la c2
Ap_Id                Type       Receptacle   Occupant     Condition
c2                   scsi-bus   connected    configured   unknown
c2::dsk/c2t9d0       disk       connected    configured   unknown
c2::dsk/c2t10d0      disk       connected    configured   unknown
c2::dsk/c2t11d0      disk       connected    configured   unknown
c2::dsk/c2t12d0      disk       connected    configured   unknown
# zpool status
pool: storedge
state: ONLINE
scrub: none requested
config:
NAME           STATE     READ WRITE CKSUM
storedge       ONLINE       0     0     0
  mirror       ONLINE       0     0     0
    c4t9d0s0   ONLINE       0     0     0
    c4t10d0s0  ONLINE       0     0     0
  mirror       ONLINE       0     0     0
    c4t11d0s0  ONLINE       0     0     0
    c4t12d0s0  ONLINE       0     0     0
# zpool detach storedge c4t10d0s0
# zpool status
pool: storedge
state: ONLINE
scrub: none requested
config:
NAME           STATE     READ WRITE CKSUM
storedge       ONLINE       0     0     0
  c4t9d0s0     ONLINE       0     0     0
  mirror       ONLINE       0     0     0
    c4t11d0s0  ONLINE       0     0     0
    c4t12d0s0  ONLINE       0     0     0
# zpool detach storedge c4t12d0s0
# zpool list
NAME       SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
storedge  16,8G   889M  15,9G    5%  ONLINE  -
# zpool status
pool: storedge
state: ONLINE
scrub: none requested
config:
NAME         STATE     READ WRITE CKSUM
storedge     ONLINE       0     0     0
  c4t9d0s0   ONLINE       0     0     0
  c4t11d0s0  ONLINE       0     0     0
# zpool attach storedge c4t9d0s0 c2t12d0
# zpool status
pool: storedge
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 5,39% done, 0h1m to go
config:
NAME           STATE     READ WRITE CKSUM
storedge       ONLINE       0     0     0
  mirror       ONLINE       0     0     0
    c4t9d0s0   ONLINE       0     0     0
    c2t12d0    ONLINE       0     0     0  16,2M resilvered
  c4t11d0s0    ONLINE       0     0     0
# cfgadm -x remove_device c2::dsk/c2t10d0
Remove SCSI device: /devices/pci@1d,700000/scsi@4,1/sd@a,0
This operation will deactivate the SCSI bus: c2
Continue (yes/no)? yes
The SCSI bus was deactivated successfully.
The hotplug operation can now be continued.
Enter y when the operation is complete, or n to abort (yes/no)? yes
# zpool export storedge
# zpool import
pool: storedge
id: 12721592472244284268
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
storedge       ONLINE
  mirror       ONLINE
    c2t9d0s2   ONLINE
    c2t12d0s0  ONLINE
  c2t11d0s2    ONLINE
# zpool import storedge
(I don't have the output anymore)
# zpool status -v storedge
pool: storedge
state: ONLINE
scrub: none requested
config:
NAME           STATE     READ WRITE CKSUM
storedge       ONLINE       0     0     0
  mirror       ONLINE       0     0     0
    c2t9d0s2   ONLINE       0     0     0
    c2t12d0s0  ONLINE       0     0     0
  c2t11d0s2    ONLINE       0     0     0