I have a btrfs filesystem whose ENOSPC problems I can't seem to resolve. No write operation can succeed; I've tried the wiki's suggestions: balancing (which itself fails with ENOSPC), mounting with nodatacow, clear_cache, nospace_cache, and enospc_debug, truncating files, deleting files, briefly microwaving the drive, etc.

btrfs fi show:

    Label: none  uuid: 04283a32-b388-480b-9949-686675fad7df
            Total devices 1 FS bytes used 135.58GiB
            devid    1 size 238.22GiB used 238.22GiB path /dev/sdb2

btrfs fi df:

    Data, single: total=234.21GiB, used=131.82GiB
    System, single: total=4.00MiB, used=48.00KiB
    Metadata, single: total=4.01GiB, used=3.76GiB

So the filesystem is pretty much unusable, and I can find no way to resuscitate it.

I ended up in this state by creating a snapshot of the root of the fs into a read/write subvolume, which I wanted to become my new root, then deleting entries in the filesystem itself outside of the new snapshot. So nothing particularly weird or crazy. The only oddness is the file count -- this is an rsnapshot volume, so it has a very large number of files, and many of them are hard linked.

It seems like the normal solution is btrfs balance, but that fails; defragment also fails. The kernel is 3.13.

Is there anything else I can or should do, or should I just wipe it and recreate it, perhaps with better initial defaults? If this kind of thing is unavoidable, how might I have anticipated and prevented it?

Fortunately this was a migration effort, so my original data is safe on ext4 (ew).

-- 
Chip Turner - cturner@pattern.net
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
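The `btrfs fi df` figures quoted above already explain the ENOSPC: the device is fully chunk-allocated (size 238.22GiB, used 238.22GiB), so no new metadata chunk can be created, while the existing metadata pool is nearly full. The only reclaimable space is the slack inside the already-allocated data chunks. A quick check of that arithmetic, using the numbers from the report:

```shell
# Arithmetic from the "btrfs fi df" output above (all figures in GiB).
# The raw device is 100% chunk-allocated, so the slack inside existing
# chunks is the only space a balance could hand back to the allocator.
awk 'BEGIN {
    data_total = 234.21; data_used = 131.82
    meta_total = 4.01;   meta_used = 3.76
    printf "data slack:        %.2f GiB\n", data_total - data_used
    printf "metadata headroom: %.2f GiB\n", meta_total - meta_used
}'
```

Roughly 102GiB sits allocated-but-unused in data chunks, while metadata has only about 0.25GiB of headroom left, which is why metadata-heavy operations (like deleting heavily hardlinked snapshot trees) hit ENOSPC even though the filesystem looks barely half full.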
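One commonly suggested workaround when balance itself fails with ENOSPC is to temporarily add a second device, so balance has somewhere to write while it repacks chunks, then remove it afterwards. A print-only sketch of that approach, under the assumption that the broken filesystem is mounted at /mnt/broken and there is spare room on another filesystem at /mnt/other (both paths are placeholders):

```shell
#!/bin/sh
# Print-only sketch of the temporary-device workaround; nothing is executed.
# Paths (/mnt/broken, /mnt/other) and the loop device are placeholders.
run() { echo "+ $*"; }   # to actually execute, change to: run() { "$@"; }

run truncate -s 4G /mnt/other/btrfs-spare.img        # backing file on another fs
run losetup /dev/loop0 /mnt/other/btrfs-spare.img    # loop device over it
run btrfs device add /dev/loop0 /mnt/broken          # lend the fs unallocated space
run btrfs balance start -dusage=5 /mnt/broken        # reclaim nearly-empty data chunks
run btrfs balance start -dusage=25 /mnt/broken       # then progressively fuller ones
run btrfs device delete /dev/loop0 /mnt/broken       # migrate everything back off
run losetup -d /dev/loop0
run rm /mnt/other/btrfs-spare.img
```

The `-dusage=N` balance filter only rewrites data chunks that are less than N% full, so the early passes are cheap and free whole chunks quickly; whether the filters are usable here depends on the 3.13 kernel and progs in question, so treat this as a sketch rather than a guaranteed fix.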