I will try to explain what I have done, while keeping it fairly short.

I installed a Linux distro (Debian) whose installer does not support btrfs, so I installed to an ext4 partition. I ran dist-upgrade to make sure I had the latest btrfs-tools, and upgraded the Debian kernel from 3.13 to 3.16.3. When all this was completed, roughly 900 MB was used on a 40 GB partition.

Next, I booted into another distro (Arch Linux), which also has a current kernel and btrfs-progs, and ran "btrfs-convert /dev/sda6". When I rebooted into the new Debian system, the btrfs filesystem was mounted read-only, and "btrfs fi show /" showed all 40 GB as used.

After some internet research, I remounted the filesystem read-write and added another 40 GB partition on a separate disk drive. I then ran "btrfs balance start -dusage=30 /", which seemed to stabilize the filesystem to the point that it is usable.

I then proceeded with my original plan, which was to make it a two-drive RAID filesystem, using "-dconvert=raid0 -mconvert=raid1". This succeeded, but the data and metadata usage stats still look all out of whack. After several rebalance attempts, my usage stats look like the following:

"btrfs fi show /" shows a total usage of 1.76 GB, with 40 GB allocated and 14.03 GB used on each device.

"btrfs fi df /" shows data totaling 2 GB allocated with 1.69 GB used, and metadata totaling 13 GB with 72.41 MB used.

Why is 13 GB needed for 72 MB of metadata? Is there any understandable way to fix this? I am not a newbie, but I am by no means a btrfs expert.

Thank you,
Tim
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
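P.S. For reference, here is roughly the full sequence of commands from the steps above, reconstructed from memory. The second partition's name (/dev/sdb1) and the exact "btrfs device add" invocation are my best recollection, not verbatim shell history:

```shell
# From the Arch Linux live environment, with /dev/sda6 unmounted:
# convert the ext4 filesystem in place to btrfs
btrfs-convert /dev/sda6

# After rebooting into Debian, the filesystem came up read-only,
# so remount it read-write
mount -o remount,rw /

# Add the second 40 GB partition on the other drive
# (/dev/sdb1 is an assumed name)
btrfs device add /dev/sdb1 /

# Rewrite data chunks that are less than 30% full to compact allocation
btrfs balance start -dusage=30 /

# Convert to a two-device layout: striped data, mirrored metadata
btrfs balance start -dconvert=raid0 -mconvert=raid1 /

# Inspect per-device allocation and per-profile usage
btrfs fi show /
btrfs fi df /
```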