Hi everyone,

I've built GlusterFS 3.3 from source and set up a 3-node Replicate volume (for serving VM images to KVM). Everything seemed to be working fine until the power went off over the weekend.

Brick 1 ran until its UPS battery went flat and then shut down safely; brick 2 did the same shortly after. Brick 3 kept running until its UPS had no power left (a fault in that UPS then cut power to the whole room).

When I came in the next morning I found the problem, replaced the UPS, and started each node in single mode to work out which one had the most recent data. It turned out to be brick 3, so I started it and left the other two off until the end of the business day, so the sync could run under less load.

When work hours ended I decided to bring bricks 1 and 2 back (after taking a backup first). I powered on 1 and 2, started glusterd, and initiated self-heal from brick 3. The result: the 2 biggest of the 6 files came back corrupted. I'm now restoring them from backup.

Any clue why this is happening? From the logs I can't see that those 2 corrupted files were ever healed (and checking the md5sums on all 3 bricks shows them identical, and corrupted). From the logs I can only see that 3 files were healed.

I can see some other users have experienced a similar problem:
http://www.bauer-power.net/2012/03/glusterfs-is-not-ready-for-san-storage.html

--
Michael
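
P.S. In case it helps anyone reproduce or spot my mistake, this is roughly how I triggered the heal and checked the copies afterwards. The volume name and brick path below are placeholders, not my real ones:

    # trigger self-heal on the replicated volume (GlusterFS 3.3 syntax)
    gluster volume heal vmimages

    # ask the self-heal daemon which files it reports as healed
    gluster volume heal vmimages info healed

    # compare the copies directly in each brick's backend directory
    # (run this on each of the three nodes and compare the sums)
    md5sum /export/brick/images/*.img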