Adam Ryczkowski
2013-Feb-18 10:37 UTC
btrfs send & receive produces "Too many open files in system"
I believe what I am going to write is a bug report.

To migrate a btrfs filesystem from one partition layout to another, I ran:

# btrfs send -v /mnt/adama-docs/backups/20130101-192722 | btrfs receive /mnt/tmp/backups

After a while the system started reporting "Too many open files in system" and denied access to almost every command line tool. While I still had access to iostat, I confirmed the expected pattern of disk activity (i.e. reads from all devices that make up /mnt/adama-docs, and writes to all devices that make up /mnt/tmp). The system is now almost unusable, although the HDD LEDs keep blinking in the same pattern as when I confirmed the disk activity. When I cancelled the send & receive process, everything went back to normal.

I use Ubuntu Quantal with the latest 3.7.8 kernel and the latest btrfs tools (v0.20-rc1) built from git.

The source filesystem, /mnt/adama-docs, sits on top of an lvm2 logical volume, which sits on top of a cryptsetup LUKS device, which in turn sits on top of an mdadm RAID-6 spanning a partition on each of 4 hard drives (I know this is a sub-optimal setup). backups/20130101-192722 is a read-only snapshot which I estimate contains ca. 100 GB of data.

The destination, /mnt/tmp/backups, is a btrfs multi-device RAID-10 filesystem based on 4 cryptsetup LUKS devices, each living in a separate partition on the same 4 physical hard drives that ultimately make up /mnt/adama-docs. Both btrfs filesystems are mounted with -o compress, and /mnt/adama-docs is also mounted with noatime.

I suspect this may be some kind of race condition, because my setup is so inefficient (I get only about 8 MB/s read, and the same write speed, from each of the 4 hard drives). The problem is perfectly reproducible on my setup, and I'm ready to assist with whatever information you need to troubleshoot it.
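[Editor's note: the exact message "Too many open files in system" is ENFILE, i.e. the kernel-wide file handle table (fs.file-max) is exhausted, as opposed to the per-process "Too many open files" (EMFILE). A minimal sketch for watching handle usage while reproducing the problem follows; nothing in it comes from the original report, and raising the ceiling only delays the failure if handles are genuinely being leaked.]

```shell
#!/bin/sh
# Print system-wide file handle usage: "allocated free max".
# /proc/sys/fs/file-nr is standard on Linux; guard anyway.
if [ -r /proc/sys/fs/file-nr ]; then
    read -r allocated free max < /proc/sys/fs/file-nr
    echo "file handles: $allocated allocated, $free free, limit $max"
else
    echo "file handles: /proc/sys/fs/file-nr not available"
fi

# To raise the system-wide ceiling temporarily (as root) while testing:
#   sysctl -w fs.file-max=2097152
```

Running this in a loop (e.g. under `watch`) during the send/receive would show whether the allocated count climbs steadily toward the limit.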
--
Adam Ryczkowski
+48505919892 Skype:sisteczko
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs"
in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Brendan Hide
2013-Mar-03 23:36 UTC
Re: btrfs send & receive produces "Too many open files in system"
On 2013/02/18 12:37 PM, Adam Ryczkowski wrote:
> ...
> to migrate btrfs from one partition layout to another.
> ...
> <source> sits on top of lvm2 logical volume, which sits on top of
> cryptsetup Luks device which subsequently sits on top of mdadm RAID-6
> spanning a partition on each of 4 hard drives ... is a read-only
> snapshot which I estimate contains ca. 100GB data.
> ...
> <destination> is btrfs multidevice raid10 filesystem, which is based
> on 4 cryptsetup Luks devices, each live as a separate partition on the
> same 4 physical hard drives ...
> ...
> about 8MB/s read (and the same speed of write) from each of all 4
> hard drives).

I hope you've solved this already - but if not:

The unnecessarily complex setup aside, a 4-disk RAID-6 is going to be slow - most people would have gone for a RAID-10 configuration, albeit with less redundancy.

Another real problem here is that you are copying data from these disks to themselves, so each of the four disks has to perform two seeks for every read/write pair. Seeks are time-consuming, on the order of 7 ms each depending on the disks you have. The way to avoid these unnecessary seeks is to first copy the data to a separate, unrelated device, and then copy from that device to your final destination.

To increase RAID-6 write performance (perhaps irrelevant here), you can try tuning the stripe_cache_size value. It can use a ton of memory depending on how large a stripe cache setting you end up with. Search online for "mdraid stripe_cache_size".

To increase read performance, you can try tuning the md array's readahead. As above, search online for "blockdev setra".

This should hopefully make a noticeable difference. Good luck.
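[Editor's note: the suggestions above can be sketched as shell commands. The device name md0, the staging path, and all numeric values below are illustrative placeholders, not values from the thread; the destructive/root-only commands are left commented out.]

```shell
#!/bin/sh
# Sketch of the tuning advice above. MD_DEV, paths, and numbers are
# illustrative placeholders, not values taken from the thread.
MD_DEV=${1:-md0}

# 1. Avoid self-copy seek thrashing: stage the send stream on an unrelated
#    disk, then receive from it (if your btrfs-progs supports -f):
#      btrfs send -f /mnt/external/snap.stream /mnt/adama-docs/backups/20130101-192722
#      btrfs receive -f /mnt/external/snap.stream /mnt/tmp/backups

# 2. Inspect the RAID-6 stripe cache. Each entry costs one 4 KiB page per
#    member disk, so 8192 entries on a 4-disk array is roughly 128 MiB of RAM.
if [ -r "/sys/block/$MD_DEV/md/stripe_cache_size" ]; then
    echo "stripe_cache_size: $(cat "/sys/block/$MD_DEV/md/stripe_cache_size")"
else
    echo "stripe_cache_size: $MD_DEV has no RAID-4/5/6 stripe cache"
fi
# Enlarge it as root (not persistent across reboots):
#   echo 8192 > /sys/block/$MD_DEV/md/stripe_cache_size

# 3. Readahead on the array device, in 512-byte sectors (16384 = 8 MiB):
#   blockdev --getra /dev/$MD_DEV
#   blockdev --setra 16384 /dev/$MD_DEV
```

Benchmark before and after each change (e.g. with iostat during a test copy), since the best values depend heavily on the disks and workload.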
--
__________
Brendan Hide
http://swiftspirit.co.za/
http://www.webafrica.co.za/?AFF1E97