I've got a 6TB btrfs array (two 3TB drives in a RAID 0). It's about 2/3
full and has lots of snapshots. I've written a script that runs through
the snapshots and copies the data efficiently (rsync --inplace
--no-whole-file) from the main 6TB array to a backup array, creating
snapshots on the backup array and then continuing on copying the next
snapshot. Problem is, it looks like it will take weeks to finish.

I've tried simply using dd to clone the btrfs partition, which
technically appears to work, but then it appears that the UUID between
the arrays is identical, so I can only mount one or the other. This
means I can't continue to simply update the backup array with the new
snapshots created on the main array (my script is capable of "catching
up" the backup array with the new snapshots, but if I can't mount both
arrays...).

Any suggestions?

-BJ Quinn
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Wed, Dec 7, 2011 at 10:35 AM, BJ Quinn <bj@placs.net> wrote:
> I've got a 6TB btrfs array (two 3TB drives in a RAID 0). It's about 2/3
> full and has lots of snapshots. I've written a script that runs through
> the snapshots and copies the data efficiently (rsync --inplace
> --no-whole-file) from the main 6TB array to a backup array, creating
> snapshots on the backup array and then continuing on copying the next
> snapshot. Problem is, it looks like it will take weeks to finish.
>
> I've tried simply using dd to clone the btrfs partition, which
> technically appears to work, but then it appears that the UUID between
> the arrays is identical, so I can only mount one or the other. This
> means I can't continue to simply update the backup array with the new
> snapshots created on the main array (my script is capable of "catching
> up" the backup array with the new snapshots, but if I can't mount both
> arrays...).
>
> Any suggestions?

Until an analog of "zfs send" is added to btrfs (and I believe there are
some side projects ongoing to add something similar), your only option
is the one you are currently using via rsync.

--
Freddie Cash
fjwcash@gmail.com
> Until an analog of "zfs send" is added to btrfs (and I believe there
> are some side projects ongoing to add something similar), your only
> option is the one you are currently using via rsync.

Well, I don't mind using the rsync script, it's just that it's so slow.
I'd love to use my script to "keep up" the backup array, which only
takes a couple of hours and is acceptable. But starting with a blank
backup array, it takes weeks to get the backup array caught up, which
isn't realistically possible.

What I need isn't really an equivalent "zfs send" -- my script can do
that. As I remember, zfs send was pretty slow too in a scenario like
this. What I need is to be able to clone a btrfs array somehow -- dd
would be nice, but as I said I end up with the identical UUID problem.
Is there a way to change the UUID of an array?

-BJ Quinn
2011-12-07, 12:35(-06), BJ Quinn:
> I've got a 6TB btrfs array (two 3TB drives in a RAID 0). It's
> about 2/3 full and has lots of snapshots. I've written a
> script that runs through the snapshots and copies the data
> efficiently (rsync --inplace --no-whole-file) from the main
> 6TB array to a backup array, creating snapshots on the backup
> array and then continuing on copying the next snapshot.
> Problem is, it looks like it will take weeks to finish.
>
> I've tried simply using dd to clone the btrfs partition, which
> technically appears to work, but then it appears that the UUID
> between the arrays is identical, so I can only mount one or
> the other. This means I can't continue to simply update the
> backup array with the new snapshots created on the main array
> (my script is capable of "catching up" the backup array with
> the new snapshots, but if I can't mount both arrays...).
[...]

You can mount them if you specify the devices upon mount.

Here's a method to transfer a full FS to some other with a different
layout. In this example, we're transferring from a FS on a 3GB device
(/dev/loop1) to a new FS on two 2GB devices (/dev/loop2, /dev/loop3):

truncate -s 3G a1
truncate -s 2G b1 b2
losetup /dev/loop1 a1
losetup /dev/loop2 b1
losetup /dev/loop3 b2

# our src FS on 1 disk:
mkfs.btrfs /dev/loop1
mkdir A B
mount /dev/loop1 A

# now we can fill it up, create subvolumes and snapshots...

# at this point, we decide to make a clone of it. To do that, we
# will make a snapshot of the device. For that, we need
# temporary storage as a block device. That could be a disk
# (like a USB key) or an nbd to another host, or anything. Here,
# I'm going to use a loop device to a file. You need enough
# space to store any modification done on the src FS while
# you're doing the transfer, plus what is needed to do the
# transfer (I can't tell you much about that).
truncate -s 100M sa
losetup /dev/loop4 sa
umount A
size=$(blockdev --getsize /dev/loop1)
echo 0 "$size" snapshot-origin /dev/loop1 | dmsetup create a
echo 0 "$size" snapshot /dev/loop1 /dev/loop4 N 8 | dmsetup create aSnap

# now we have /dev/mapper/a as the src device which we can
# remount as such and use:
mount /dev/mapper/a A

# and aSnap as a writable snapshot of the src device, which we
# mount separately:
mount /dev/mapper/aSnap B

# The trick here is that we're going to add the two new devices
# to "B" and remove the snapshot one. btrfs will automatically
# migrate the data to the new devices:
btrfs device add /dev/loop2 /dev/loop3 B
btrfs device delete /dev/mapper/aSnap B
# END

Once that's completed, you should have a copy of A in B. You may want to
watch the status of the snapshot while you're transferring to check that
it doesn't get full.

That method can't be used to do some incremental "syncing" between two
FSes, for which you'd still need something similar to "zfs send".
(Speaking of which, you may want to consider zfsonlinux, which is now
reaching a point where it's about as stable as btrfs, with the same
performance level if not better, and it has a lot more features. I'm
doing the switch myself while waiting for btrfs to be a bit more
mature.)

Because of the same uuid, btrfs commands like "filesystem show" will not
always give sensible output. I tried to rename the fsid by changing it
in the superblocks, but it looks like it is also included in a few other
places, where changing it manually breaks some checksums, so I guess
someone would have to write a tool to do that job. I'm surprised it
doesn't exist already (or maybe it does and I'm not aware of it?).

--
Stephane
On 12/7/2011 1:49 PM, BJ Quinn wrote:
> What I need isn't really an equivalent "zfs send" -- my script can do
> that. As I remember, zfs send was pretty slow too in a scenario like
> this. What I need is to be able to clone a btrfs array somehow -- dd
> would be nice, but as I said I end up with the identical UUID
> problem. Is there a way to change the UUID of an array?

No, btrfs send is exactly what you need. Using dd is slow because it
copies unused blocks, and requires the source fs be unmounted and the
destination be an empty partition. rsync is slow because it can't take
advantage of the btrfs tree to quickly locate the files (or parts of
them) that have changed. A btrfs send would solve all of these issues.
> No, btrfs send is exactly what you need. Using dd is slow because it
> copies unused blocks, and requires the source fs be unmounted and the
> destination be an empty partition. rsync is slow because it can't take
> advantage of the btrfs tree to quickly locate the files (or parts of
> them) that have changed. A btrfs send would solve all of these issues.

Well, that depends. Using dd is slow if you have a large percentage of
the drive unused. In my case, half or more of the drive is in use, and
dd is about as efficient as is theoretically possible on the part of the
drive that is in use. You're right that it requires the drive to be
unmounted and the destination to be an empty partition, but what I want
to use dd for is to catch an empty drive up to being current, and
afterwards I'll use my rsync script to keep it up to date with the
latest snapshots. Maybe btrfs send will be more efficient, but in my
experience with zfs send, dd was 10x faster unless your drive was nearly
empty.

At any rate, was someone saying that some work had already started on
something like btrfs send? Or, alternatively, given that dd would be
sufficient for my needs, is there any way to change the UUID of a btrfs
partition after I've cloned it?

-BJ
On 08.12.2011 17:07, BJ Quinn wrote:
> At any rate, was someone saying that some work had already started on
> something like btrfs send?

That's right.

-Jan
2011-12-08, 10:49(-05), Phillip Susi:
> On 12/7/2011 1:49 PM, BJ Quinn wrote:
>> What I need isn't really an equivalent "zfs send" -- my script can do
>> that. As I remember, zfs send was pretty slow too in a scenario like
>> this. What I need is to be able to clone a btrfs array somehow -- dd
>> would be nice, but as I said I end up with the identical UUID
>> problem. Is there a way to change the UUID of an array?
>
> No, btrfs send is exactly what you need. Using dd is slow because it
> copies unused blocks, and requires the source fs be unmounted.
[...]

Not necessarily: you can snapshot them (as in the method I suggested).
If your FS is already on a device mapper device, you can even get away
with not unmounting it (freeze, reload the device mapper table with a
snapshot-origin one and thaw).

> and the destination be an empty partition. rsync is slow
> because it can't take advantage of the btrfs tree to quickly
> locate the files (or parts of them) that have changed. A
> btrfs send would solve all of these issues.
[...]

When you want to clone a FS using a similar device or set of devices, a
tool like clone2fs or ntfsclone that copies only the used sectors across
sequentially would probably be a lot more efficient, as it copies the
data at the max speed of the drive, seeking as little as possible.

--
Stephane
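[To make the "copy only the used sectors sequentially" idea above concrete, here is a toy Python sketch. It is emphatically NOT clone2fs, ntfsclone, or anything btrfs-aware: it approximates "unused" as "all-zero", which is only valid for a freshly zeroed or sparse image, and it exists purely to illustrate why sequential used-block copying is fast -- the read side never seeks.]

```python
# Toy illustration only (an assumption-laden sketch, NOT a real cloning
# tool): stream the source sequentially, and skip blocks that are all
# zeroes by seeking forward instead of writing, so the destination file
# comes out sparse while the copy itself stays purely sequential.
def sparse_copy(src, dst, blocksize=65536):
    zero = b"\x00" * blocksize
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            block = fin.read(blocksize)
            if not block:
                break
            if block == zero[:len(block)]:
                # "unused" block: advance the write position without
                # writing, leaving a hole in the destination file
                fout.seek(len(block), 1)
            else:
                fout.write(block)
        # ensure a trailing hole still extends the file to full size
        fout.truncate()
```

A real cloning tool would of course consult the filesystem's own allocation metadata instead of guessing from zeroes, which is exactly why such tools have to understand the on-disk format.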
>> At any rate, was someone saying that some work had already started on
>> something like btrfs send?
>
> That's right.

Google tells me that someone is you. :)

What Google wouldn't tell me though was whether you have something I
could test?

-BJ
On 08.12.2011 17:28, BJ Quinn wrote:
>>> At any rate, was someone saying that some work had already started on
>>> something like btrfs send?
>
>> That's right.
>
> Google tells me that someone is you. :)
>
> What Google wouldn't tell me though was whether you have something I
> could test?

Well, it's telling you the right thing :-)

Currently I'm distracted by reliable backref walking, which turned out
to be a prerequisite of btrfs send. Once I have that thing done, direct
work on the send/receive functionality will continue. As soon as there's
something that can be tested, you'll find it on this list.

-Jan
On Thursday, 08 December, 2011 10:00:54 Stephane CHAZELAS wrote:
> Because of the same uuid, the btrfs commands like filesystem
> show will not always give sensible outputs. I tried to rename
> the fsid by changing it in the superblocks, but it looks like it
> is also included in a few other places where changing it
> manually breaks some checksums, so I guess someone would have to
> write a tool to do that job. I'm surprised it doesn't exist
> already (or maybe it does and I'm not aware of it?).

The fs-uuid is recorded in the header of every tree block. From
fs/btrfs/ctree.h:

[...]
/*
 * every tree block (leaf or node) starts with this header.
 */
struct btrfs_header {
	/* these first four must match the super block */
	u8 csum[BTRFS_CSUM_SIZE];
	u8 fsid[BTRFS_FSID_SIZE];	/* FS specific uuid */
[...]

Moreover, I would be more worried about the uuid of the device than the
filesystem one...

--
gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) <kreijack@inwind.it>
Key fingerprint = 4769 7E51 5293 D36C 814E C054 BF04 F161 3DC5 0512
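[As a rough illustration of where the fsid lives on disk, here is a hypothetical Python sketch (not a btrfs tool, just a reader written from the on-disk layout discussed in this thread): the primary superblock sits at 64 KiB and, as in btrfs_header, starts with csum[32] followed by fsid[16]; the magic "_BHRfS_M" sits at byte 64 of the superblock.]

```python
import uuid

SUPERBLOCK_OFFSET = 64 * 1024   # primary superblock location on the device
BTRFS_CSUM_SIZE = 32            # csum[32] comes first...
BTRFS_FSID_SIZE = 16            # ...then fsid[16]
BTRFS_MAGIC = b"_BHRfS_M"       # at byte 64 of the superblock

def read_fsid(path):
    """Read the filesystem uuid from the primary superblock of an image
    file or device node at `path`."""
    with open(path, "rb") as f:
        f.seek(SUPERBLOCK_OFFSET)
        sb = f.read(4096)
    if sb[64:72] != BTRFS_MAGIC:
        raise ValueError("no btrfs superblock at 64KiB")
    return uuid.UUID(bytes=bytes(sb[BTRFS_CSUM_SIZE:BTRFS_CSUM_SIZE + BTRFS_FSID_SIZE]))
```

Reading the fsid is the easy half; writing it back is what breaks the checksums mentioned above.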
> As soon as there's something that can be tested, you'll find it on this
> list.

Great, I'd love to try it. I spent a lot of time with ZFS, and the zfs
send/recv functionality was very convenient.

Meanwhile, does anyone know how I can change the UUID of a btrfs
partition, or are there any other suggestions?

-BJ
On Thu, Dec 08, 2011 at 01:56:59PM -0600, BJ Quinn wrote:
> > As soon as there's something that can be tested, you'll find it on
> > this list.
>
> Great, I'd love to try it. I spent a lot of time with ZFS, and the zfs
> send/recv functionality was very convenient.
>
> Meanwhile, does anyone know how I can change the UUID of a btrfs
> partition, or are there any other suggestions?

You can't change the uuid of an existing btrfs partition. Well, you can,
but you have to rewrite all the metadata blocks.

The performance problem you're hitting is probably from metadata seeks
all over the place. Jeff Liu has a new snapshot diffing tool in
development that may make for less IO from rsync.

Care to share your rsync script?

-chris
> Care to share your rsync script?

Sure. It's a little raw, and makes some assumptions about my
environment, but it does the job other than the fact that it takes weeks
to run. :)

In the example below, the "main" or source FS is mounted at /mnt/btrfs,
the "backup" or target FS at /mnt/btrfsbackup, and this script is
located at /mnt/btrfs/backupscripts. I also run a slight variation of
this script that makes it only "catch up" on snapshots that don't
already exist on the backup FS, simply by commenting out the rsync
command directly above the echo that says "Export resynced snapshot". If
you don't comment that out, then the script attempts to re-sync ALL
snapshots, even ones that have already been copied over in a previous
run, effectively double-checking that everything has already been
copied.

This script is as efficient as I know how to make it, as it uses the
--no-whole-file and --inplace rsync switches to prevent btrfs from
thinking lots of blocks have changed that haven't really changed and
eating up lots of space. Also, an assumption is made that directly under
/mnt/btrfs there are many subvolumes, and that the snapshots for these
subvolumes are stored under /mnt/btrfs/snapshots/[subvol name]/[snap
name].

Lastly, please note that I *DO* understand the purpose of the --bwlimit
switch for rsync. I've run it without that and it still takes weeks.
It's only in there now because it seemed to prevent issues I was having
where the whole system would lock up under heavy btrfs activity. I can't
remember if that was a problem I solved by switching out my SATA
controller card or by upgrading my kernel, but I don't believe I'm
having that issue anymore. FWIW.

#!/bin/bash

# The following script is for exporting snapshots from one drive to another.
# Putting the word "STOP" (all caps, without quotes) in stopexport.txt will
# abort the process at the end of the current rsync job.
DATE=`date +%Y%m%d`
DATETIME=`date +%Y%m%d%H%M%S`
SCRIPTSFOLDER="/mnt/btrfs/backupscripts"
BACKUPFOLDER="/mnt/btrfs"
EXTERNALDRIVE="/mnt/btrfsbackup"

echo "Export Started `date`"
echo

# This will create all the snapshots in the original drive's snapshots folder
# on the export drive that don't exist on the export drive.
for PATHNAME in $BACKUPFOLDER/snapshots/*
do
    if [ `cat $SCRIPTSFOLDER/stopexport.txt` = "STOP" ]; then
        echo "STOP"
        break
    fi
    SHARENAME=`basename $PATHNAME`
    btrfs subvolume create $EXTERNALDRIVE/$SHARENAME
    for SNAPPATH in $PATHNAME/*
    do
        echo $SNAPPATH
        SNAPNAME=`basename $SNAPPATH`
        if [ ! -d "$EXTERNALDRIVE/snapshots/$SHARENAME/$SNAPNAME" ]; then
            rsync -avvP --delete --bwlimit=20000 --ignore-errors \
                --no-whole-file --inplace \
                $SNAPPATH/ $EXTERNALDRIVE/$SHARENAME
            mkdir -p $EXTERNALDRIVE/snapshots/$SHARENAME
            btrfs subvolume snapshot $EXTERNALDRIVE/$SHARENAME \
                $EXTERNALDRIVE/snapshots/$SHARENAME/$SNAPNAME
            echo "Export created snapshot $EXTERNALDRIVE/snapshots/$SHARENAME/$SNAPNAME"
        else
            rsync -avvP --delete --bwlimit=20000 --ignore-errors \
                --no-whole-file --inplace \
                $SNAPPATH/ $EXTERNALDRIVE/snapshots/$SHARENAME/$SNAPNAME
            echo "Export resynced snapshot $EXTERNALDRIVE/snapshots/$SHARENAME/$SNAPNAME"
        fi
        if [ `cat $SCRIPTSFOLDER/stopexport.txt` = "STOP" ]; then
            echo "STOP"
            break
        fi
    done
done

echo "Export Completed `date`"
> You can't change the uuid of an existing btrfs partition. Well, you
> can, but you have to rewrite all the metadata blocks.

Is there a tool that would allow me to rewrite all the metadata blocks
with a new UUID? At this point, it can't possibly take longer than the
way I'm trying to do it now...

Someone once said: "Resetting the UUID on btrfs isn't a quick-and-easy
thing - you have to walk the entire tree and change every object. We've
got a bad-hack in meego that uses btrfs-debug-tree and changes the UUID
while it runs the entire tree, but it's ugly as hell."

Ok, I'll take the bad-hack. How would I actually go about using said
bad-hack?

-BJ
On Monday, 12 December, 2011 15:41:29 you wrote:
> > You can't change the uuid of an existing btrfs partition. Well, you
> > can, but you have to rewrite all the metadata blocks.
>
> Is there a tool that would allow me to rewrite all the metadata blocks
> with a new UUID? At this point, it can't possibly take longer than the
> way I'm trying to do it now...
>
> Someone once said "Resetting the UUID on btrfs isn't a quick-and-easy
> thing - you have to walk the entire tree and change every object. We've
> got a bad-hack in meego that uses btrfs-debug-tree and changes the UUID
> while it runs the entire tree, but it's ugly as hell."

I am looking into that. btrfs-debug-tree is capable of dumping every
leaf's and node's logical address. To change the UUID of a btrfs
filesystem:

On every leaf/node we should:
- update the FSID (a)
- update the chunk_uuid [*]
- update the checksum

For the "dev_item" items we should update:
- the device UUID (b)
- the FSID (see 'a')

For the "chunk_item" items we should update:
- the device UUID of every stripe (b)

For every superblock (three per device), we should update:
- the FSID (see 'a')
- the device uuid (see 'b')
- for every "system chunk" item contained in the superblock:
  - the device UUID of every stripe (b)
- the checksum

The most complex part is to map the logical address to the physical
device. In the next days I will try (if I have enough time) to make
something...

> Ok, I'll take the bad-hack. How would I actually go about using said
> bad-hack?
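[To give a feel for just the easiest piece of the checklist above -- the superblock -- here is a hedged Python sketch. It is hypothetical and deliberately incomplete: it only touches the primary superblock, so tree blocks, dev_items, chunk stripes and the backup superblocks would all still carry the old uuid, leaving a real filesystem inconsistent. It assumes the layout discussed in this thread: superblock at 64 KiB, csum[32] then fsid[16], with the checksum being crc32c over bytes 32..4095 stored little-endian at the front.]

```python
def crc32c(data, crc=0):
    # Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78,
    # with the conventional initial and final inversion.
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF

def set_superblock_fsid(path, new_fsid):
    """Rewrite the fsid in the PRIMARY superblock only (illustrative,
    NOT a complete uuid-change tool)."""
    assert len(new_fsid) == 16
    with open(path, "r+b") as f:
        f.seek(64 * 1024)
        sb = bytearray(f.read(4096))
        if sb[64:72] != b"_BHRfS_M":
            raise ValueError("no btrfs superblock at 64KiB")
        sb[32:48] = new_fsid                  # fsid[16] follows csum[32]
        csum = crc32c(bytes(sb[32:]))         # checksum covers bytes 32..4095
        sb[0:4] = csum.to_bytes(4, "little")  # stored at the front, LE
        f.seek(64 * 1024)
        f.write(sb)
```

Scaling this up to every tree block is exactly the hard part: each leaf and node must be located via the logical-to-physical mapping before its header and checksum can be rewritten.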
--
gpg key@ keyserver.linux.it: Goffredo Baroncelli (ghigo) <kreijack@inwind.it>
Key fingerprint = 4769 7E51 5293 D36C 814E C054 BF04 F161 3DC5 0512
Actually, I seem to be having problems where my rsync script ends up
hanging the system again. It's pretty repeatable: the system is
completely frozen and I have to do a hard reboot. It runs for a couple
of hours and hangs the system every time.

Of course, I'm not doing anything special other than an rsync of
compressed btrfs data and snapshots. Well, that and my btrfs partitions
are on external SATA port multipliers, and btrfs is used to create a
two-drive RAID-0 for each partition (the source and the destination). I
tried the bwlimit switch on rsync, which seemed to let it go longer
between crashes, but of course that just means I'm copying the data more
slowly too...

I can't find anything in the usual logs. Any suggestions? I'm using
CentOS 6.2, fully updated.

-BJ
Now I've managed to basically bring my system to its knees. My rsync
script that takes weeks ends up bringing the system to a crawl long
before it can ever finish. I end up with 100% of the CPU used up by the
following kernel threads, as shown by top:

btrfs-endio-wri
btrfs-delayed-m
btrfs-transacti
btrfs-delalloc-
btrfs-endio-met

Now, I've got a bunch of snapshots, and the server is a backup server
that backs up all the machines on the network. It's using -o compress.
I've got a 6TB array of two 3TB drives that is now about 85% full, with
lots of small files. I tried to add another drive, but it won't ever
finish a rebalance. df shows all 9TB as part of the array, but only
shows available space as if the array were 6TB. An attempt at copying
all the data to a second array effectively brings the computer to its
knees running the threads listed above. The server never really recovers
until a hard reboot, and can't ever finish running a backup.

Are there any mount options I should change? I need the compression and
snapshots to have enough space.

-BJ
On Fri, 30 Dec 2011 11:25:58 AM BJ Quinn wrote:
> Any suggestions? I'm using CentOS 6.2 fully updated.

Are you using the 3.2 kernel as well? The RHEL kernel probably has an
old version of btrfs in it.

cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP