Chris Quenelle
2007-Jun-19 22:42 UTC
[zfs-discuss] Best practice for moving FS between pool on same machine?
What is the best (meaning fastest) way to move a large file system from one pool
to another pool on the same machine? I have a machine with two pools. One pool
currently has all my data (4 filesystems), but it's misconfigured. The other pool
is configured correctly, and I want to move the file systems to the new pool.
Should I use 'rsync' or 'zfs send'?

What happened is that I forgot I couldn't incrementally add raid devices. I want
to end up with two raidz(x4) vdevs in the same pool. Here's what I have now:

# zpool status
  pool: dbxpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dbxpool     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c2t6d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0

errors: No known data errors

  pool: dbxpool2
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Jun 19 15:16:19 2007
config:

        NAME        STATE     READ WRITE CKSUM
        dbxpool2    ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c2t5d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
-------------------

'dbxpool' has all my data today. Here are my steps:

1. move data to dbxpool2
2. remount using dbxpool2
3. destroy dbxpool1
4. create new proper raidz vdev inside dbxpool2 using devices from dbxpool1

Any advice? I'm constrained by trying to minimize the downtime for the group of
people using this as their file server. So I ended up with an ad-hoc assignment
of devices. I'm not worried about optimizing my controller traffic at the moment.
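Roughly, the intended end state of step 4 (two raidz(x4) vdevs in one pool) would
be reached by adding the freed disks as a second raidz vdev once the data has been
copied off and the old pool destroyed. A sketch, assuming the same four devices
from the current dbxpool are reused:

  ## only after all data has been verified on dbxpool2:
  ## destroy the old pool and attach its four disks as a second raidz vdev
  # zpool destroy dbxpool
  # zpool add dbxpool2 raidz c1t4d0 c2t6d0 c2t1d0 c2t4d0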
Constantin Gonzalez
2007-Jun-20 07:55 UTC
[zfs-discuss] Best practice for moving FS between pool on same machine?
Hi Chris,

> What is the best (meaning fastest) way to move a large file system
> from one pool to another pool on the same machine? I have a machine
> with two pools. One pool currently has all my data (4 filesystems), but it's
> misconfigured. The other pool is configured correctly, and I want to move the
> file systems to the new pool. Should I use 'rsync' or 'zfs send'?

zfs send/receive is the fastest and most efficient way. I've used it multiple
times on my home server until I had my configuration right :).

> What happened is that I forgot I couldn't incrementally add raid devices. I want
> to end up with two raidz(x4) vdevs in the same pool. Here's what I have now:

For this reason, I decided to go with mirrors. Yes, they use more raw storage
space, but they are also much more flexible to expand: just add two disks when
the pool is full and you're done. If you have a lot of disks or can afford to
add 4-5 disks at a time, then RAID-Z may be just as easy to do, but remember
that double disk failures in RAID-5 variants can be quite common; you may want
RAID-Z2 instead.

> 1. move data to dbxpool2
> 2. remount using dbxpool2
> 3. destroy dbxpool1
> 4. create new proper raidz vdev inside dbxpool2 using devices from dbxpool1

Add:

0. Snapshot the data in dbxpool1 so you can use zfs send/receive.

Then the above should work fine.

> I'm constrained by trying to minimize the downtime for the group
> of people using this as their file server. So I ended up with
> an ad-hoc assignment of devices. I'm not worried about
> optimizing my controller traffic at the moment.

Ok. If you want to really be thorough, I'd recommend:

0. Run a backup, just in case. It never hurts.
1. Do a snapshot of dbxpool1.
2. zfs send/receive dbxpool1 -> dbxpool2.
   (This happens while users are still using dbxpool1, so no downtime.)
3. Unmount dbxpool1.
4. Do a second snapshot of dbxpool1.
5. Do an incremental zfs send/receive of dbxpool1 -> dbxpool2.
   (This should take only a small amount of time.)
6. Mount dbxpool2 where dbxpool1 used to be.
7. Check that everything is fine with the newly mounted pool.
8. Destroy dbxpool1.
9. Use the disks from dbxpool1 to expand dbxpool2 (be careful :) ).

You might want to exercise the above steps on an extra spare disk with two
pools just to gain some confidence before doing it in production.

I have a script that automatically does 1-6 that is looking for beta testers.
If you're interested, let me know.

Hope this helps,
   Constantin

-- 
Constantin Gonzalez                        Sun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering    http://www.sun.de/
Tel.: +49 89/4 60 08-25 91                 http://blogs.sun.com/constantin/

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
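A rough command-level sketch of steps 1-6 for a single filesystem; the
filesystem name "data" and the snapshot names are made up for illustration, and
the same sequence would be repeated for each of the four filesystems:

  ## steps 1-2: first snapshot and full copy while users keep working on dbxpool
  # zfs snapshot dbxpool/data@copy1
  # zfs send dbxpool/data@copy1 | zfs receive dbxpool2/data

  ## steps 3-5: quiesce the source, snapshot again, send only the recent changes
  ## (keep dbxpool2/data unmounted so the incremental receive does not complain)
  # zfs unmount dbxpool/data
  # zfs snapshot dbxpool/data@copy2
  # zfs send -i dbxpool/data@copy1 dbxpool/data@copy2 | zfs receive dbxpool2/data

  ## step 6: give the copy the mountpoint the original used to have
  ## (/export/dbx is a placeholder path)
  # zfs set mountpoint=/export/dbx dbxpool2/data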
Chris Quenelle
2007-Jun-20 22:39 UTC
[zfs-discuss] Re: Best practice for moving FS between pool on same machine?
Thanks, Constantin! That sounds like the right answer for me.
Can I use send and/or snapshot at the pool level? Or do I have
to use it on one filesystem at a time? I couldn't quite figure this
out from the man pages.

--chris
Constantin Gonzalez
2007-Jun-21 07:53 UTC
[zfs-discuss] Re: Best practice for moving FS between pool on same machine?
Hi,

Chris Quenelle wrote:
> Thanks, Constantin! That sounds like the right answer for me.
> Can I use send and/or snapshot at the pool level? Or do I have
> to use it on one filesystem at a time? I couldn't quite figure this
> out from the man pages.

the ZFS team is working on a zfs send -r (recursive) option to be able to
recursively send and receive hierarchies of ZFS filesystems in one go,
including pools.

Until then, you'll need to do it one filesystem at a time.

This is not always trivial: if you send a full snapshot, then an incremental
one, and the target filesystem is mounted, you'll likely get an error that the
target filesystem was modified. Make sure the target filesystems are unmounted,
and ideally marked as unmountable, while performing the send/receives. Also,
you may want to use the -F option to receive, which forces a rollback of the
target filesystem to the most recent snapshot.

I've written a script to do all of this, but it's only "works on my system"
certified.

I'd like to get some feedback and validation before I post it on my blog,
so anyone, let me know if you want to try it out.

Best regards,
   Constantin

-- 
Constantin Gonzalez                        Sun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering    http://www.sun.de/
Tel.: +49 89/4 60 08-25 91                 http://blogs.sun.com/constantin/

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
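For one filesystem, the unmount/rollback handling described above might look
roughly like this; the names and the final mountpoint are illustrative
placeholders, not taken from the thread:

  ## keep the copy from being mounted (and modified) while the transfers run
  # zfs set mountpoint=none dbxpool2/data

  ## incremental send; -F on the receive rolls the target back to its most
  ## recent snapshot in case it was modified anyway
  # zfs send -i dbxpool/data@copy1 dbxpool/data@copy2 | zfs receive -F dbxpool2/data

  ## once the final incremental copy is done, set the real mountpoint
  # zfs set mountpoint=/export/dbx dbxpool2/data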
Chris Quenelle
2007-Jun-21 18:00 UTC
[zfs-discuss] Re: Best practice for moving FS between pool on same machine?
Sorry I can't volunteer to test your script. I want to do the steps by hand
to make sure I understand them. If I have to do it all again, I'll get in touch.
Thanks for the advice!

--chris