Okay, just a general suggestion:

For the files that have link count 1 on the back end on the 'bad' brick, remove the file from the back end on that brick and perform a stat on the file from the *mount* on the 'good' brick. This should create the file and the .glusterfs hard link on the bad brick too. If that works, you can do the same for all files.

But you have to be sure that there are no pending heals on the parent directory before you do this, or a reverse heal will happen (and the file will disappear from the good brick too!). If you are not sure, the safest way is to keep a copy of the file (from the good brick) elsewhere before doing this.

On 01/29/2016 08:31 PM, Ronny Adsetts wrote:
> Ravishankar N wrote on 29/01/2016 14:35:
>> What version of gluster are you using? Was there a chance there were
>> directory renames from the client?
>
> Currently running 3.6.8-1 from gluster.org on Debian Wheezy; arch is amd64. My other reply has a history of the upgrades.
>
> The directory containing the bulk of the files (win_patches) has not been renamed since the files were copied there a few weeks ago when the volume was created. There are affected files all over the volume.
>
> The gluster volumes are not really being used in anger yet. This 'software' volume is being used by our patch management software via the samba-shared, fuse-mounted volume. I had noticed problems within the app when running "3.2.7-3+deb7u1~bpo60+1" from Debian Squeeze but had not investigated, as the system upgrade was pending anyway.
>
> It's possible that node reboots have coincided with the patch management software running scheduled patch downloads, though not to the extent that ~50% of the node's files are in some sort of indeterminate state.
>
>> There was a bug which Pranith fixed quite some time back:
>> http://review.gluster.org/#/c/7879/ for missing .glusterfs link
>> files.
>
> Ronny
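A minimal shell sketch of the procedure above. The brick path, mount point, safety-copy directory, and volume name ("software") are assumptions for illustration; adjust them for your setup, and run the heal-info check before touching any file:

```shell
#!/bin/sh
# Sketch of the manual per-file heal described above.
# ASSUMPTIONS: brick/mount paths and the safety-copy location are
# hypothetical examples, not taken from the original thread.

restore_file() {
    brick=$1   # back-end directory of the 'bad' brick, e.g. /data/brick1
    mount=$2   # fuse mount of the volume on the 'good' side
    file=$3    # affected file, relative to the volume root

    # Only act on files whose back-end copy has link count 1, i.e. the
    # .glusterfs hard link is missing on this brick.
    links=$(stat -c '%h' "$brick/$file") || return 1
    [ "$links" = "1" ] || return 0

    # Keep a safety copy of the good copy first, in case a reverse heal
    # removes it from the good brick as well.
    mkdir -p /root/safety-copy
    cp -a "$mount/$file" "/root/safety-copy/$(basename "$file")"

    # Remove the bad-brick copy from the back end, then stat the file via
    # the mount so the file (and its .glusterfs hard link) is recreated
    # on the bad brick.
    rm "$brick/$file"
    stat "$mount/$file" > /dev/null
}

# Before running this for any file, confirm there are no pending heals
# on the parent directory, e.g.:
#   gluster volume heal software info
```

The link-count test matters: a healthy file on a brick normally has at least two hard links (the file itself plus its `.glusterfs/<gfid>` link), so a count of 1 is what flags the broken copies.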
Ravishankar N wrote on 29/01/2016 15:48:
> Okay, just some general suggestion:
>
> For the files that have link count 1 on the back-end on the 'bad'
> brick, remove the file from the back end on that brick and perform a
> stat on the file from the *mount* on the 'good' brick. This should
> create the file and the .glusterfs hard link on the bad brick too.
> If that works, you can do the same for all files.
>
> But you have to be sure that there are no pending heals on the parent
> directory before you do this or a reverse heal will happen (and the
> file will disappear from the good brick too!). If you are not sure,
> the safest way is to keep a copy of the file (from the good brick)
> elsewhere before doing this.

Thanks. There are still issues with some files so I'll look at doing this. None of the files on this volume are irreplaceable, so no biggie. Just trying to get comfortable enough with Gluster to throw all our data on it. Not actually lost any files yet, which is good. :-)

As far as checking for pending heals, is "gluster volume heal software info" sufficient?

Thanks.

Ronny
--
Ronny Adsetts
Technical Director
Amazing Internet Ltd, London
t: +44 20 8977 8943
w: www.amazinginternet.com

Registered office: 85 Waldegrave Park, Twickenham, TW1 4TJ
Registered in England. Company No. 4042957