After a few problems with the power supply, we ended up with a few split-brain situations on Gluster 3.3.0. We have 8 bricks holding the data in a distributed-replicated topology. Several files are in split-brain, and while trying to fix one of them we found it does not heal automatically. We deleted both the file and its .glusterfs gfid file, but the self-heal daemon recreates the file without copying the data.

The xattrs are as follows.

On the first brick:

getfattr -m . -e hex -d fd04a34a7aa503052503b65ab6eaea5f
# file: fd04a34a7aa503052503b65ab6eaea5f
trusted.afr.storage-client-0=0x000000000000000000000000
trusted.afr.storage-client-1=0x000000000000000000000000
trusted.gfid=0xf0d12a323e6f434a9886371a3e425f84
trusted.storage-stripe-0.stripe-count=0x3200
trusted.storage-stripe-0.stripe-index=0x3000
trusted.storage-stripe-0.stripe-size=0x31333130373200

On the second brick:

getfattr -m . -e hex -d fd04a34a7aa503052503b65ab6eaea5f
# file: fd04a34a7aa503052503b65ab6eaea5f
trusted.afr.storage-client-0=0x000000000000000000000000
trusted.afr.storage-client-1=0x000000000000000000000000
trusted.afr.storage-io-threads=0x000000000000000000000000
trusted.afr.storage-replace-brick=0x000000000000000000000000
trusted.gfid=0xf0d12a323e6f434a9886371a3e425f84
trusted.storage-stripe-0.stripe-count=0x3200
trusted.storage-stripe-0.stripe-index=0x3000
trusted.storage-stripe-0.stripe-size=0x31333130373200

On the third brick, to which we want the data to be healed:

getfattr -m . -e hex -d fd04a34a7aa503052503b65ab6eaea5f
# file: fd04a34a7aa503052503b65ab6eaea5f
trusted.gfid=0xf0d12a323e6f434a9886371a3e425f84

On the fourth brick:

getfattr -m . \
         -e hex -d fd04a34a7aa503052503b65ab6eaea5f
# file: fd04a34a7aa503052503b65ab6eaea5f
trusted.afr.storage-client-2=0x000000010000000100000000
trusted.afr.storage-client-3=0x000000000000000000000000
trusted.afr.storage-io-threads=0x000000000000000000000000
trusted.afr.storage-replace-brick=0x000000000000000000000000
trusted.gfid=0xf0d12a323e6f434a9886371a3e425f84
trusted.storage-stripe-0.stripe-count=0x3200
trusted.storage-stripe-0.stripe-index=0x3100
trusted.storage-stripe-0.stripe-size=0x31333130373200

The problem, as far as I can see, is the value set on trusted.afr.storage-client-2. Is there any documentation on what each flag means, and how can we reset the split-brain one to 0 so that the self-heal daemon copies the data?

As a side note, we have around 150 files with similar issues. Is there any limit on the number of files the self-heal daemon can handle? Would it be safe to manually copy the data from one brick to the other?

Thanks a lot in advance,
Samuel
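P.S. To make sense of these values ourselves, we sketched a small decoder. It assumes (as we understand the AFR changelog format) that each 12-byte trusted.afr.* value is three big-endian 32-bit counters of pending data, metadata, and entry operations; the helper name is ours:

```python
import struct

def decode_afr(value):
    """Split a trusted.afr.* hex string into (data, metadata, entry)
    pending-operation counters, assuming three big-endian uint32s."""
    hexdigits = value[2:] if value.startswith("0x") else value
    return struct.unpack(">III", bytes.fromhex(hexdigits))

# The value on the fourth brick for trusted.afr.storage-client-2:
print(decode_afr("0x000000010000000100000000"))  # -> (1, 1, 0)
```

If that reading is right, the fourth brick is accusing storage-client-2 of one pending data and one pending metadata operation, which would explain why self-heal does not consider the empty copy a clean source.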