Just an idea: btrfs

Problem: I've had two systems die with huge load averages (>100!) in a case where a user program had, unexpectedly to me, been doing database-like operations that caused multiple files to become heavily fragmented. The system eventually dies when data can no longer be appended to the fragmented files as fast as it is collected in real time. My example case is two systems, each running btrfs raid1 on two HDDs. Normal write speed is about 100 MByte/s; after heavy fragmentation, the CPUs sit at 100% I/O wait and throughput drops to a few hundred kByte/s.

Possible fix: btrfs checks the ratio of file size to number of fragments and, for a bad ratio, either:

1: Performs a non-CoW copy to defragment the file;
2: Turns off CoW for that file and logs a syslog warning saying so;
3: Automatically defragments the file.

Or?

For my case, I'm not sure "2" is a good idea in case the user is rattling through a gazillion files and syslog gets swamped. Unfortunately, I don't know beforehand which files to mark no-CoW unless I no-CoW the entire user/application tree.

Thoughts?

Thanks,
Martin
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
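For anyone wanting to experiment from user space while waiting on a kernel-side fix, the size-vs-fragment-count check described above could be approximated with filefrag(8) and stat(1). This is only a sketch: the threshold (one extent per MiB) is a guess of mine, not a value derived from btrfs itself, and `bad_ratio`/`check_file` are hypothetical helper names.

```shell
#!/bin/sh
# Sketch of the "file size vs. number of fragments" check from the mail.
# The one-extent-per-MiB threshold is an assumption, tune to taste.

bad_ratio() {
    # $1 = file size in bytes, $2 = extent count reported by filefrag
    size=$1; extents=$2
    mib=$(( (size + 1048575) / 1048576 ))   # size rounded up to whole MiB
    [ "$mib" -lt 1 ] && mib=1
    # flag the file if it averages more than one extent per MiB
    [ "$extents" -gt "$mib" ]
}

check_file() {
    f=$1
    size=$(stat -c %s -- "$f")
    # filefrag prints e.g. "foo.dat: 137 extents found"
    extents=$(filefrag -- "$f" | awk '{print $(NF-2)}')
    if bad_ratio "$size" "$extents"; then
        echo "fragmented: $f ($extents extents, $size bytes)"
        # possible automatic response, option 3 from the mail:
        # btrfs filesystem defragment -- "$f"
    fi
}
```

Option 2 (no-CoW) is harder to retrofit from user space: the C attribute (`chattr +C`) only takes reliable effect on new or empty files, so an existing fragmented file would have to be copied into a no-CoW directory rather than flagged in place.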