Hello folks, I have a question. Currently I have a ZFS pool (mirror) on two
internal disks. I want to connect that server to a SAN, add more storage to
this pool (doubling the space), and start using it. Then I want to take the
internal disks out of the pool and use the SAN only. Is there any way to do
that with ZFS pools? Is there any way to move the data from those internal
disks to the external disks?

I know there are ways around it: I could make a new pool, create a snapshot
on the old one, send it over, then bring the zone down, do an incremental
sync, and switch the zone to the new pool. But I want to do this while
everything is up. So my goal is to add another (SAN) disk to my existing
two-disk mirrored pool, move the data from the internal disks to the SAN
while everything is running, and then just take the internal disks out.

Any comments or suggestions greatly appreciated.

Regards,
Chris
On 5/29/07, Krzys <krzys at perfekt.net> wrote:
> Currently I have a ZFS pool (mirror) on two internal disks... Is there any
> way to do that with ZFS pools? Is there any way to move the data from those
> internal disks to the external disks?

You can "zpool replace" your disks with other disks, provided that you have
the same number of new disks and they are of the same or greater size.

--
Regards,
Cyril
Perfect, I will try to play with that...

Regards,
Chris

On Tue, 29 May 2007, Cyril Plisko wrote:
> You can "zpool replace" your disks with other disks, provided that you have
> the same number of new disks and they are of the same or greater size.
>
> --
> Regards,
> Cyril
Sorry to bother you, but something is not clear to me regarding this process.
Let's say I have two internal disks (73GB each) and I mirror them. Now I want
to replace those two mirrored disks with one LUN on the SAN that is around
100GB. I meet the requirement of having more than 73GB of storage, but do I
need only a single LUN of at least 73GB, or do I actually need two LUNs of
73GB or more since the pool is mirrored?

My goal is simply to move the data from the two mirrored disks onto one
single SAN device. Any idea if what I am planning is doable? Or do I need to
use zfs send and receive, keep everything updated, and just switch when I am
done? Or do I just add this SAN disk to the existing pool and then remove the
mirror somehow? I would just have to make sure that all data is off those
disks... is there any option to evacuate data off that mirror?

Here is exactly what I have:

bash-3.00# zpool list
NAME     SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
mypool   68G    52.9G   15.1G   77%   ONLINE   -
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00#

On Tue, 29 May 2007, Cyril Plisko wrote:
> You can "zpool replace" your disks with other disks, provided that you have
> the same number of new disks and they are of the same or greater size.
Krzys wrote:
> ...do I need only something like 73GB at minimum, or do I actually need two
> LUNs of 73GB or more since I have it mirrored?

You can attach any number of devices to a mirror.

You can detach all but one of the devices from a mirror. Obviously, when
the number is one, you don't currently have a mirror.

The resulting logical size will be equivalent to the smallest device.

> My goal is simply to move the data of two mirrored disks into one single
> SAN device... is there any option to evacuate data off that mirror?

The ZFS terminology is "attach" and "detach". A "replace" is an attach
followed by a detach.

It is a good idea to verify that the sync has completed before detaching.
zpool status will show the current status.
 -- richard
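Richard's advice to verify that the sync has completed before detaching can be scripted. A minimal sketch, assuming the `zpool status` scrub-line wording seen later in this thread; the helper name is made up here, and the sample strings are taken from output elsewhere in the thread:

```shell
# resilver_done OUTPUT
# Succeeds when the supplied `zpool status` output shows no resilver in
# progress. In practice you would feed it `zpool status <pool>` in a loop
# and only run `zpool detach` once it succeeds.
resilver_done() {
  ! printf '%s\n' "$1" | grep -q 'resilver in progress'
}

busy=' scrub: resilver in progress, 0.00% done, 17h50m to go'
idle=' scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007'

resilver_done "$busy" || echo "still resilvering - do not detach yet"
resilver_done "$idle" && echo "resilver complete - safe to detach"
```

The check is deliberately conservative: anything other than a clean status keeps the old device attached.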
Hmm, I am having some problems. I did follow what you suggested, and here is
what I did:

bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool detach mypool c1t3d0
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        mypool    ONLINE       0     0     0
          c1t2d0  ONLINE       0     0     0

errors: No known data errors

So now I have only one disk in my pool. The c1t2d0 disk is a 72GB SAS drive,
and I am trying to replace it with a 100GB SAN LUN (emcpower0a):

bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@0,0
       1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@1,0
       2. c1t2d0 <SEAGATE-ST973401LSUN72G-0556-68.37GB>
          /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@2,0
       3. c1t3d0 <FUJITSU-MAY2073RCSUN72G-0501-68.37GB>
          /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@3,0
       4. c2t5006016041E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
          /pci@1f,700000/pci@0/SUNW,qlc@2/fp@0,0/ssd@w5006016041e035a4,0
       5. c2t5006016941E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
          /pci@1f,700000/pci@0/SUNW,qlc@2/fp@0,0/ssd@w5006016941e035a4,0
       6. c3t5006016841E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
          /pci@1f,700000/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w5006016841e035a4,0
       7. c3t5006016141E035A4d0 <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
          /pci@1f,700000/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w5006016141e035a4,0
       8. emcpower0a <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
          /pseudo/emcp@0
Specify disk (enter its number): ^D

So I run the replace command and I get an error:

bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Any idea what I am doing wrong? Why does it think that emcpower0a is too
small?

Regards,
Chris

On Thu, 31 May 2007, Richard Elling wrote:
> You can attach any number of devices to a mirror.
>
> You can detach all but one of the devices from a mirror. Obviously, when
> the number is one, you don't currently have a mirror.
>
> The resulting logical size will be equivalent to the smallest device.
>
> The ZFS terminology is "attach" and "detach". A "replace" is an attach
> followed by a detach.
>
> It is a good idea to verify that the sync has completed before detaching.
> zpool status will show the current status.
> -- richard
On 5/31/07, Krzys <krzys at perfekt.net> wrote:
> So I run the replace command and I get an error:
> bash-3.00# zpool replace mypool c1t2d0 emcpower0a
> cannot replace c1t2d0 with emcpower0a: device is too small
Try "zpool attach mypool emcpower0a"; see
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .

Will
Yes, but my goal is to replace the existing disk, which is a 72GB internal
disk, with a SAN storage disk which is 100GB in size. As long as I am able to
detach the old one afterwards, that's great; otherwise I will be stuck with
one internal disk and one SAN disk, which I don't like much.

Regards,
Chris

On Fri, 1 Jun 2007, Will Murnane wrote:
> Try "zpool attach mypool emcpower0a"; see
> http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .
>
> Will
Nevertheless, I get the following error:

bash-3.00# zpool attach mypool emcpower0a
missing <new_device> specification
usage:
        attach [-f] <pool> <device> <new_device>
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool attach mypool c1t2d0 emcpower0a
cannot attach emcpower0a to c1t2d0: device is too small
bash-3.00#

Is there any way to add that EMC SAN device to ZFS at all? It seems like
emcpower0a cannot be added in any way... but check this out: I tried to add
it as a new pool, and here is what I got:

bash-3.00# zpool create mypool2 emcpower0a
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors

  pool: mypool2
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        mypool2       ONLINE       0     0     0
          emcpower0a  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool list
NAME      SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
mypool    68G    53.1G   14.9G   78%   ONLINE   -
mypool2   123M   83.5K    123M    0%   ONLINE   -
bash-3.00#

On Fri, 1 Jun 2007, Will Murnane wrote:
> Try "zpool attach mypool emcpower0a"; see
> http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .
>
> Will
On 6/1/07, Krzys <krzys at perfekt.net> wrote:
> bash-3.00# zpool list
> NAME      SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
> mypool    68G    53.1G   14.9G   78%   ONLINE   -
> mypool2   123M   83.5K    123M    0%   ONLINE   -
Are you sure you've allocated as large a LUN as you thought initially?
Perhaps ZFS is doing something funky with it; does putting UFS on it
show a large filesystem or a small one?

Will
OK, I think I figured out the problem. For that EMC PowerPath device, zpool
takes partition 0 of the disk and tries to attach that to my pool. So when I
added emcpower0a, I got the following:

bash-3.00# zpool list
NAME      SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
mypool    68G    53.1G   14.9G   78%   ONLINE   -
mypool2   123M   83.5K    123M    0%   ONLINE   -

because my emcpower0a label looked like this:

format> verify

Primary label contents:

Volume name = <        >
ascii name  = <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
pcyl        = 51200
ncyl        = 51198
acyl        =     2
nhead       =   256
nsect       =    16
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -    63      128.00MB    (64/0/0)       262144
  1       swap    wu      64 -   127      128.00MB    (64/0/0)       262144
  2     backup    wu       0 - 51197      100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm     128 - 51197       99.75GB    (51070/0/0) 209182720
  7 unassigned    wm       0                0         (0/0/0)             0

So I changed my layout to look like this:

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 - 51197      100.00GB    (51198/0/0) 209707008
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 51197      100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0

created a new pool, and now I have the following:

bash-3.00# zpool list
NAME      SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
mypool    68G    53.1G   14.9G   78%   ONLINE   -
mypool2   99.5G    80K   99.5G    0%   ONLINE   -

So now I will try the replace. I guess zpool does treat some devices
differently, in particular the ones under EMC PowerPath control: it uses the
first slice of the disk to create the pool rather than the whole device.
Anyway, thanks to everyone for the help; the replace should work now, and I
am going to try it.
Chris

On Fri, 1 Jun 2007, Will Murnane wrote:
> Try "zpool attach mypool emcpower0a"; see
> http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .
>
> Will
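The 128MB-versus-100GB confusion above can be sanity-checked arithmetically from the label: the Blocks column times the 512-byte sector size is the capacity zpool actually sees for that slice. A small sketch (the helper name is made up; the block counts are the ones from the labels in this thread):

```shell
# Convert a VTOC block count to whole gigabytes, assuming the 512-byte
# sectors that format/prtvtoc report for these devices.
blocks_to_gb() {
  echo $(( $1 * 512 / 1024 / 1024 / 1024 ))
}

blocks_to_gb 262144      # original slice 0 -> prints 0 (it was only 128MB)
blocks_to_gb 209707008   # relabeled slice 0 -> prints 99 (the full ~100GB)
```

Running this on the slice you are about to hand to `zpool replace` would have flagged the "device is too small" error before ZFS did.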
Yeah, it does something funky that I did not expect: zpool seems to take
slice 0 of that EMC LUN rather than the whole device. When I created that LUN
and formatted the disk, it looked like this:

format> verify

Primary label contents:

Volume name = <        >
ascii name  = <DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16>
pcyl        = 51200
ncyl        = 51198
acyl        =     2
nhead       =   256
nsect       =    16
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -    63      128.00MB    (64/0/0)       262144
  1       swap    wu      64 -   127      128.00MB    (64/0/0)       262144
  2     backup    wu       0 - 51197      100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm     128 - 51197       99.75GB    (51070/0/0) 209182720
  7 unassigned    wm       0                0         (0/0/0)             0

That is why, when I was trying to replace the other disk, zpool took slice 0
of that disk, which was 128MB, and treated it as the pool device rather than
taking the whole disk or slice 2 or whatever it does with normal devices. I
have this system connected to an EMC CLARiiON and I am using PowerPath
software from EMC for multipathing. I will try to replace the old internal
disk with this one and see how that works.

Thanks so much for the help.

Chris

On Fri, 1 Jun 2007, Will Murnane wrote:
> Are you sure you've allocated as large a LUN as you thought initially?
> Perhaps ZFS is doing something funky with it; does putting UFS on it
> show a large filesystem or a small one?
>
> Will
OK, now it seems to be doing what I wanted:

bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool detach mypool c1t3d0
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME      STATE     READ WRITE CKSUM
        mypool    ONLINE       0     0     0
          c1t2d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.00% done, 17h50m to go
config:

        NAME            STATE     READ WRITE CKSUM
        mypool          ONLINE       0     0     0
          replacing     ONLINE       0     0     0
            c1t2d0      ONLINE       0     0     0
            emcpower0a  ONLINE       0     0     0

errors: No known data errors
bash-3.00#

Thank you everyone who helped me with this.

Chris

On Fri, 1 Jun 2007, Will Murnane wrote:
> Are you sure you've allocated as large a LUN as you thought initially?
> Perhaps ZFS is doing something funky with it; does putting UFS on it
> show a large filesystem or a small one?
>
> Will
On Fri, 1 Jun 2007, Krzys wrote:
> bash-3.00# zpool replace mypool c1t2d0 emcpower0a
> bash-3.00# zpool status
>   pool: mypool
>  state: ONLINE
> status: One or more devices is currently being resilvered.  The pool will
>         continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>  scrub: resilver in progress, 0.00% done, 17h50m to go
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         mypool          ONLINE       0     0     0
>           replacing     ONLINE       0     0     0
>             c1t2d0      ONLINE       0     0     0
>             emcpower0a  ONLINE       0     0     0

I don't think this is what you want. Notice that it is in the process of
replacing c1t2d0 with emcpower0a. Once the replace operation is complete,
c1t2d0 will be removed from the configuration.

You've got two options. Let's say your current mirror is c1t2d0 and c1t3d0,
and you want to replace c1t3d0 with emcpower0a.

Option one: perform a direct replace:

        # zpool replace mypool c1t3d0 emcpower0a

Option two: remove c1t3d0 and add in emcpower0a:

        # zpool detach mypool c1t3d0
        # zpool attach mypool c1t2d0 emcpower0a

Do not mix these two options, as you showed in your email. Do not perform a
'detach' followed by a 'replace'. That is mixing your options, and you will
end up with a config you were not expecting.

Regards,
markm
zpool replace == zpool attach + zpool detach

It is not a good practice to detach and then attach, because you are
vulnerable to a single disk failure between the detach and the point where
the attach (and its resilver) completes. It is a good practice to attach
first and then detach.

There is no practical limit to the number of sides of a mirror in ZFS.
 -- richard
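Putting Richard's ordering together with Chris's original goal (both internal disks out, one SAN LUN in), the safe sequence might be sketched as below. This is an illustrative script, not from the thread: `zpool` is stubbed to print instead of execute so the flow can be read (and tested) without real hardware; delete the stub line to run the commands for real. The pool and device names are the ones used in this thread.

```shell
# Stub: print each zpool command instead of executing it.
# Remove this line on a real system.
zpool() { echo "zpool $*"; }

# 1. Grow the mirror first: the pool briefly has three sides, so the
#    data is never unprotected.
zpool attach mypool c1t2d0 emcpower0a

# 2. Wait here until `zpool status mypool` no longer reports
#    "resilver in progress" (see markm's and Richard's warnings).

# 3. Only then detach the internal disks.
zpool detach mypool c1t2d0
zpool detach mypool c1t3d0
```

Compared with detach-then-replace, this ordering means a disk failure mid-migration still leaves at least one good copy of the pool.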