Nope, it's not necessary for them to all have the xattr.

Do you see anything at least in .glusterfs/indices/dirty on all bricks?

-Krutika

On Sun, Apr 24, 2016 at 4:17 AM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> On 24/04/2016 1:24 AM, Krutika Dhananjay wrote:
>
>> Each shard is also associated with a gfid.
>>
>> So do you see the gfids of these 8 shards in the
>> .glusterfs/indices/xattrop directory on any of the bricks?
>
> There were no files at all in the xattrop dir.
>
> Also I did a spot check of the extended attrs of the shards in question
> and they did not look right (as I understand them). Shouldn't
> trusted.afr.datastore4-client-0/1/2 be set for all shards (excluding
> their own brick index)?
>
> ssh root@vng getfattr -d -m . -e hex \
>   /tank/vmdata/datastore4/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1172
> # file: tank/vmdata/datastore4/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1172
> trusted.afr.datastore4-client-2=0x000000000000000000000000
> trusted.afr.dirty=0x000000000000000000000000
> trusted.bit-rot.version=0x0400000000000000571632100008cfbf
> trusted.gfid=0xac4dc6375d6a4b0c90763ffb5314a5a3
> getfattr: Removing leading '/' from absolute path names
>
> root@vna:~# ssh root@vna getfattr -d -m . -e hex \
>   /tank/vmdata/datastore4/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1172
> # file: tank/vmdata/datastore4/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1172
> trusted.afr.dirty=0x000000000000000000000000
> trusted.bit-rot.version=0x0500000000000000571626b8000316f0
> trusted.gfid=0xac4dc6375d6a4b0c90763ffb5314a5a3
> getfattr: Removing leading '/' from absolute path names
>
> root@vna:~# ssh root@vnb getfattr -d -m . -e hex \
>   /tank/vmdata/datastore4/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1172
> # file: tank/vmdata/datastore4/.shard/744c5059-303d-4e82-b5be-0a5f53b1aeff.1172
> trusted.afr.datastore4-client-2=0x000000000000000000000000
> trusted.afr.dirty=0x000000000000000000000000
> trusted.bit-rot.version=0x040000000000000057160faa000e0632
> trusted.gfid=0xac4dc6375d6a4b0c90763ffb5314a5a3
>
> --
> Lindsay Mathieson
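[For readers following along: the trusted.afr.* values in the getfattr output above are AFR's pending-change counters. As a rough sketch, assuming bash and the usual AFR layout of three big-endian 32-bit counters (data, metadata, entry operations) packed into the 12-byte value, they can be decoded like so; `decode_afr` is a made-up helper for illustration, not a gluster tool:]

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of gluster) to decode a trusted.afr.* value
# as printed by `getfattr -e hex`. Assumes the usual AFR layout: three
# big-endian 32-bit counters for pending data, metadata and entry
# operations. All-zero counters, as in the output above, mean no heal is
# recorded as pending against that brick.
decode_afr() {
    local v=${1#0x}                 # strip the 0x prefix
    local data=$((16#${v:0:8}))     # bytes 0-3: pending data operations
    local meta=$((16#${v:8:8}))     # bytes 4-7: pending metadata operations
    local entry=$((16#${v:16:8}))   # bytes 8-11: pending entry operations
    echo "data=$data metadata=$meta entry=$entry"
}

decode_afr 0x000000000000000000000000   # prints: data=0 metadata=0 entry=0
```

[A non-zero data counter on, say, trusted.afr.datastore4-client-2 would mean that brick's copy is suspected stale and a heal is owed to it.]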
On 24/04/2016 2:56 PM, Krutika Dhananjay wrote:
> Nope, it's not necessary for them to all have the xattr.

That's good :)

> Do you see anything at least in .glusterfs/indices/dirty on all bricks?

I checked; the dirty dir is empty on all bricks.

I used diff3 to compare the checksums of the shards, and it revealed that
seven of the shards were the same on two bricks (vna & vng) and one of the
shards was the same on two other bricks (vna & vnb). Fortunately none were
different on all 3 bricks :)

Using the checksums as a quorum, I deleted all the singleton shards (7 on
vnb, 1 on vng), touched the owning file and issued a "heal full". All 8
shards were restored with checksums matching the other two bricks. A
recheck of the entire set of shards for the VM showed all 3 copies as
identical, and the VM itself is functioning normally.

It's one way to manually heal shard mismatches which gluster hasn't
detected, if somewhat tedious. It's a method which lends itself to
automation though.

Cheers,

--
Lindsay Mathieson
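[The quorum step above could be automated along these lines. This is only a sketch under assumptions not in the thread: `quorum_check` is a made-up name, and it assumes the three brick .shard directories are reachable as local paths (e.g. mounted over NFS, or fed per-brick md5sum lists instead). It flags the singleton copy that disagrees with the other two bricks:]

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the manual heal-by-quorum described above; the
# function name and layout are illustrative, not a gluster feature.
# Given one shard name and three brick .shard directories, report which
# single copy (if any) disagrees with the other two.
quorum_check() {
    local shard=$1 b1=$2 b2=$3 b3=$4
    local s1 s2 s3
    s1=$(md5sum "$b1/$shard" | cut -d' ' -f1)
    s2=$(md5sum "$b2/$shard" | cut -d' ' -f1)
    s3=$(md5sum "$b3/$shard" | cut -d' ' -f1)
    if [ "$s1" = "$s2" ] && [ "$s2" = "$s3" ]; then
        echo "OK $shard"            # all three copies agree
    elif [ "$s1" = "$s2" ]; then
        echo "ODD $b3/$shard"       # singleton copy: candidate to delete
    elif [ "$s1" = "$s3" ]; then
        echo "ODD $b2/$shard"
    elif [ "$s2" = "$s3" ]; then
        echo "ODD $b1/$shard"
    else
        echo "SPLIT $shard"         # all three differ: no quorum, leave alone
    fi
}
```

[After deleting the ODD singletons, a "gluster volume heal <volname> full" would recreate them from the surviving majority, as in the recovery above; SPLIT shards have no quorum and would need manual judgement.]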