Hi,

I have been reading a lot of articles online about the dangers of using ZFS with non-ECC RAM. Specifically: when good data is read from disk and compared with its checksum, a RAM error can corrupt the in-memory copy, causing a spurious checksum failure, and the bad data might then be written back to disk in an attempt to "correct" it, corrupting the good on-disk copy in the process. This would be exacerbated by a scrub, which could run through all your data and potentially corrupt it. There is a strong current of opinion that using ZFS without ECC RAM is "suicide for your data".

I have been unable to find any discussion of the extent to which this is true for btrfs. Does btrfs handle checksum errors in the same way as ZFS, or does it perform additional checks before writing "corrected" data back to disk? For example, on detecting a checksum error, it could read the data again into a different memory location to determine whether the error was in the disk copy or in memory (I've sketched what I mean in a P.S. below).

From what I've been reading, it sounds like ZFS should not be used with non-ECC RAM. That is reasonable, as ZFS's resource requirements mean that you probably only want to run it on server-grade hardware anyway. But with btrfs expected to become the default filesystem for Linux, that would mean that all Linux machines, even cheap consumer-grade hardware, would either need ECC RAM or have to forgo many of the advantages of btrfs.

What is the situation?

-- 
Ian Hinder
http://numrel.aei.mpg.de/people/hinder
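
P.S. To make the "read it again" idea concrete, here is a rough, self-contained sketch of the check I have in mind. This is purely my own illustration, not real btrfs or ZFS code: the "disk", the block I/O and the checksum are all simulated with toy stand-ins.

/* Sketch of the "read it twice before repairing" idea -- NOT real
 * btrfs or ZFS code.  The "disk" is an in-memory array and the
 * checksum is a toy function (real filesystems use crc32c etc.).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

/* stand-ins for the on-disk block and its stored checksum */
static uint8_t disk_block[BLOCK_SIZE];
static uint32_t stored_csum;

/* trivial checksum, for illustration only */
static uint32_t checksum(const uint8_t *buf, size_t len)
{
        uint32_t c = 0;
        for (size_t i = 0; i < len; i++)
                c = c * 31 + buf[i];
        return c;
}

/* pretend to read the block from disk into a caller-supplied buffer */
static void read_block_into(uint8_t *buf)
{
        memcpy(buf, disk_block, BLOCK_SIZE);
}

/*
 * Verify a block; on checksum failure, re-read it into a *different*
 * buffer before deciding whether the on-disk copy is really bad.
 * Returns:
 *    0 - data good
 *    1 - mismatch reproduced on the second read: the disk copy is
 *        probably bad, so repairing it from a mirror seems justified
 *   -1 - mismatch NOT reproduced: suspect a transient RAM error and
 *        do not "repair" the disk
 */
static int verify_block(void)
{
        uint8_t first[BLOCK_SIZE], second[BLOCK_SIZE];

        read_block_into(first);
        if (checksum(first, BLOCK_SIZE) == stored_csum)
                return 0;

        /* checksum failed: read again into a different memory location */
        read_block_into(second);
        if (checksum(second, BLOCK_SIZE) == stored_csum)
                return -1;      /* first read was corrupted in RAM */

        return 1;               /* failure persists: blame the disk copy */
}

int main(void)
{
        memset(disk_block, 0xab, BLOCK_SIZE);
        stored_csum = checksum(disk_block, BLOCK_SIZE);
        printf("verify_block() = %d\n", verify_block());
        return 0;
}

Obviously a persistent RAM fault could corrupt both reads in the same way, so a check like this would only catch transient errors, but it seems as though it would shrink the window in which bad RAM can trigger a bogus "repair".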