Ravishankar N
2016-Jul-15 15:36 UTC
[Gluster-users] lingering <gfid:*> entries in volume heal, gluster 3.6.3
On 07/15/2016 08:48 PM, Kingsley wrote:
> I don't have star installed so I used ls,

Oops, typo. I meant `stat`.

> but yes they all have 2 links
> to them (see below).

Everything seems to be in place for the heal to happen. Can you tail -f
the shd logs on all nodes and manually launch gluster vol heal volname?
Use DEBUG log level if you have to and examine the output for clues.

Also, some dumb things to check: are all the bricks really up, and is
the shd connected to them, etc.

-Ravi

> BTW, I noticed that the entries my script said didn't exist are actually
> symlinks to other gfid entries that are directories. Most of these
> target directories have 2 links, but one has 3 and one has 64. Anyway,
> the files:
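
(For reference, the steps Ravi suggests translate to something like the
following, assuming the volume is named callrec, as it is later in this
thread, and the default log directory of /var/log/glusterfs; adjust
both for your install:

    # On each node, follow the self-heal daemon log
    tail -f /var/log/glusterfs/glustershd.log

    # From any one node, trigger a manual heal of the volume
    gluster volume heal callrec

The shd log lines produced while the heal runs are what to examine for
clues about why the <gfid:*> entries are not being healed.)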
Kingsley
2016-Jul-15 16:02 UTC
[Gluster-users] lingering <gfid:*> entries in volume heal, gluster 3.6.3
On Fri, 2016-07-15 at 21:06 +0530, Ravishankar N wrote:
> On 07/15/2016 08:48 PM, Kingsley wrote:
> > I don't have star installed so I used ls,
> Oops, typo. I meant `stat`.
> > but yes they all have 2 links
> > to them (see below).
>
> Everything seems to be in place for the heal to happen. Can you tail -f
> the shd logs on all nodes and manually launch gluster vol heal volname?
> Use DEBUG log level if you have to and examine the output for clues.

I presume I can do that with this command:

    gluster volume set callrec diagnostics.brick-log-level DEBUG

How can I find out what the log level is at the moment, so that I can
put it back afterwards?

> Also, some dumb things to check: are all the bricks really up, and is
> the shd connected to them, etc.

All bricks are definitely up. I just created a file on a client and it
appeared on all 4 bricks. I don't know how to tell whether the shd is
connected to all of them, though.

Cheers,
Kingsley.
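
(For reference, both questions can probably be answered from the
standard CLI; a sketch, noting that `gluster volume get` was not yet
available in the 3.6.x releases, so the current value has to be
inferred from the reconfigured-options list:

    # Options changed from their defaults appear under "Options Reconfigured";
    # if diagnostics.brick-log-level is not listed, it is still at the
    # default (INFO)
    gluster volume info callrec

    # After debugging, reset the option to its default rather than
    # setting a guessed old value back by hand
    gluster volume reset callrec diagnostics.brick-log-level

    # The status output includes a "Self-heal Daemon on <host>" row per
    # node with an Online column, showing whether each shd is running
    gluster volume status callrec

Whether a running shd is actually connected to every brick is best
confirmed from its own log, as suggested earlier in the thread.)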