search for: waymack

Displaying 17 results from an estimated 17 matches for "waymack".

2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks! From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] Sent: Monday, October 23, 2017 1:52 AM To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com> Cc: gluster-users <Gluster-users at gluster.org> Subject: Re: [Gluster-users] gfid entries in volume heal info that do not heal Hi Jim & Matt, Can you also check for the link count in the stat output of those hardlink entries in the .glusterfs folder on the...
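A minimal sketch of the link-count check and cleanup described above, assuming bash on a brick host; the brick path and gfid are examples taken from elsewhere in this thread. The entry under .glusterfs is a hard link, so a link count of 1 in stat means no real file references it any more and the stale gfid entry is what gets removed.

BRICK=/exp/b1/gv0
GFID=e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
stat "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"      # note the "Links:" count
# Link count >= 2: a real file still references this gfid; locate it by hard link:
find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
     -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -print
# Link count == 1: orphaned gfid entry on this brick, the one deleted above:
# rm "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"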
2017 Oct 23
0
gfid entries in volume heal info that do not heal
...path to the original. I have the inode from stat. Looking now to dig out the path/filename from xfs_db on the specific inodes individually. Is the hash of the filename or <path>/filename and if so relative to where? /, <path from top of brick>, ? On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote: > In my case I was able to delete the hard links in the .glusterfs > folders of the bricks and it seems to have done the trick, thanks! > > > From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] > > > Sent: Monday, October 23, 2017 1:52 AM > > To: Jim...
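For mapping an inode back to a path without xfs_db, a simpler (if slower) sketch is to search the brick by inode number; the paths below are examples. Note that the gfid is not a hash of the filename or of a path at all: it is a UUID stored in the trusted.gfid xattr, and the .glusterfs/xx/yy/<gfid> location is derived from its first two byte pairs.

BRICK=/exp/b1/gv0      # example brick root
INODE=12345678         # inode reported by stat on the .glusterfs entry
find "$BRICK" -xdev -inum "$INODE" -print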
2017 Oct 24
3
gfid entries in volume heal info that do not heal
...the inode from stat. Looking now to dig out the path/filename from > xfs_db on the specific inodes individually. > > Is the hash of the filename or <path>/filename and if so relative to > where? /, <path from top of brick>, ? > > On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote: > > In my case I was able to delete the hard links in the .glusterfs folders > of the bricks and it seems to have done the trick, thanks! > > > > *From:* Karthik Subrahmanya [mailto:ksubrahm at redhat.com] > *Sent:* Monday, October 23, 2017 1:52 AM > *To:* Jim Kinn...
2017 Oct 24
0
gfid entries in volume heal info that do not heal
...to dig out the > > path/filename from xfs_db on the specific inodes individually. > > > > Is the hash of the filename or <path>/filename and if so relative > > to where? /, <path from top of brick>, ? > > > > On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote: > > > In my case I was able to delete the hard links in the .glusterfs > > > folders of the bricks and it seems to have done the trick, > > > thanks! > > > > > > > > > From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] > >...
2017 Oct 17
3
gfid entries in volume heal info that do not heal
...;brick path>/.glusterfs/ e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3 Also run the "gluster volume heal <volname>" once and send the shd log. And the output of "gluster volume heal <volname> info split-brain" Regards, Karthik On Mon, Oct 16, 2017 at 9:51 PM, Matt Waymack <mwaymack at nsgdv.com> wrote: > OK, so here's my output of the volume info and the heal info. I have not > yet tracked down physical location of these files, any tips to finding them > would be appreciated, but I'm definitely just wanting them gone. I forgot > to mention earlie...
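The command sequence requested above, assuming a volume named gv0 and a default log location:

gluster volume heal gv0                        # trigger a heal pass
gluster volume heal gv0 info                   # entries still pending heal
gluster volume heal gv0 info split-brain       # entries in split-brain
# self-heal daemon (shd) log to attach, per node:
#   /var/log/glusterfs/glustershd.log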
2017 Oct 19
2
gfid entries in volume heal info that do not heal
...ntos 7.3 Should I just remove the contents of the .glusterfs folder on both and restart gluster and run a ls/stat on every file? When I run a heal, it no longer has a decreasing number of files to heal so that's an improvement over the last 2-3 weeks :-) On Tue, 2017-10-17 at 14:34 +0000, Matt Waymack wrote: > Attached is the heal log for the volume as well as the shd log. > > > > Run these commands on all the bricks of the replica pair to get > > > the attrs set on the backend. > > [root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . > /exp/b1/gv0/.glus...
2017 Oct 18
1
gfid entries in volume heal info that do not heal
...ume heal. 4. Check the heal info output to see whether the file got healed. If one file gets healed, then do step 1 & 2 for the rest of the files and do step 3 & 4 once at the end. Let me know if that resolves the issue. Thanks & Regards, Karthik On Tue, Oct 17, 2017 at 8:04 PM, Matt Waymack <mwaymack at nsgdv.com> wrote: > Attached is the heal log for the volume as well as the shd log. > > >> Run these commands on all the bricks of the replica pair to get the > attrs set on the backend. > > [root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . >...
2017 Oct 17
0
gfid entries in volume heal info that do not heal
...ck tpc-cent-glus2-081017:/exp/b4/gv0 Status: Connected Number of entries in split-brain: 0 Brick tpc-arbiter1-100617:/exp/b4/gv0 Status: Connected Number of entries in split-brain: 0 -Matt From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] Sent: Tuesday, October 17, 2017 1:26 AM To: Matt Waymack <mwaymack at nsgdv.com> Cc: gluster-users <Gluster-users at gluster.org> Subject: Re: [Gluster-users] gfid entries in volume heal info that do not heal Hi Matt, Run these commands on all the bricks of the replica pair to get the attrs set on the backend. On the bricks of first replic...
2017 Oct 23
0
gfid entries in volume heal info that do not heal
...contents of the .glusterfs folder on both and > restart gluster and run a ls/stat on every file? > > > When I run a heal, it no longer has a decreasing number of files to heal > so that's an improvement over the last 2-3 weeks :-) > > On Tue, 2017-10-17 at 14:34 +0000, Matt Waymack wrote: > > Attached is the heal log for the volume as well as the shd log. > > > > Run these commands on all the bricks of the replica pair to get the attrs set on the backend. > > > > [root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10...
2017 Nov 06
0
gfid entries in volume heal info that do not heal
...to dig out the > > path/filename from xfs_db on the specific inodes individually. > > > > Is the hash of the filename or <path>/filename and if so relative > > to where? /, <path from top of brick>, ? > > > > On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote: > > > In my case I was able to delete the hard links in the .glusterfs > > > folders of the bricks and it seems to have done the trick, > > > thanks! > > > > > > > > > From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] > >...
2017 Oct 16
2
gfid entries in volume heal info that do not heal
...e info <volname> gluster volume heal <volname> info And also the getfattr output of the files which are in the heal info output from all the bricks of that replica pair. getfattr -d -e hex -m . <file path on brick> Thanks & Regards Karthik On 16-Oct-2017 8:16 PM, "Matt Waymack" <mwaymack at nsgdv.com> wrote: Hi all, I have a volume where the output of volume heal info shows several gfid entries to be healed, but they've been there for weeks and have not healed. Any normal file that shows up on the heal info does get healed as expected, but these gfid entr...
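A sketch of the getfattr call requested above, run on every brick of the replica pair; the brick path and gfid are examples taken from this thread's heal info output:

getfattr -d -e hex -m . \
    /exp/b4/gv0/.glusterfs/d4/32/d43284d4-86aa-42ff-98b8-f6340b407d9d
# Compare the trusted.afr.* values across bricks: non-zero counters show
# which bricks still blame each other and keep the entry in heal info.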
2017 Oct 16
0
gfid entries in volume heal info that do not heal
...t;gfid:d43284d4-86aa-42ff-98b8-f6340b407d9d> Status: Connected Number of entries: 24 Brick tpc-arbiter1-100617:/exp/b4/gv0 Status: Connected Number of entries: 0 Thank you for your help! From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] Sent: Monday, October 16, 2017 10:27 AM To: Matt Waymack <mwaymack at nsgdv.com> Cc: gluster-users <Gluster-users at gluster.org> Subject: Re: [Gluster-users] gfid entries in volume heal info that do not heal Hi Matt, The files might be in split brain. Could you please send the outputs of these? gluster volume info <volname> gluster v...
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote: > Hi all, > > > > I have an issue where our volume will not start from any node. When > attempting to start the volume it will eventually return: > > Error: Request timed out > > > > For some time after that, the volume is l...
2017 Dec 15
3
Production Volume will not start
Hi all, I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: Error: Request timed out For some time after that, the volume is locked and we either have to wait or restart Gluster services. In the glusterd.log, it shows the following: [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
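A few first checks for the timeout described above, assuming a default install layout and an example volume name gv0:

tail -n 200 /var/log/glusterfs/glusterd.log      # brick start / RPC errors around the timeout
gluster volume status gv0                        # which bricks and ports actually came up
systemctl restart glusterd                       # per node; as noted above, restarting the service clears the volume lock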
2017 Dec 19
0
How to make sure self-heal backlog is empty ?
Mine also has a list of files that seemingly never heal. They are usually isolated on my arbiter bricks, but not always. I would also like to find an answer for this behavior. -----Original Message----- From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Hoggins! Sent: Tuesday, December 19, 2017 12:26 PM To: gluster-users <gluster-users at
2017 Dec 19
3
How to make sure self-heal backlog is empty ?
Hello list, I'm not sure what to look for here: I'm not sure whether what I'm seeing is the actual "backlog" (which we need to make sure is empty while performing a rolling upgrade, before going to the next node). How can I tell, while reading this, if it's okay to reboot / upgrade my next node in the pool? Here is what I do for checking: for i in `gluster volume list`; do
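The loop above is cut off by the archive; a plausible completion, assuming the intent is to print the pending-heal backlog per volume (heal-count needs a reasonably recent Gluster release):

for i in $(gluster volume list); do
    echo "=== $i ==="
    gluster volume heal "$i" info                      # entries still pending heal
    gluster volume heal "$i" statistics heal-count     # per-brick summary count
done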
2017 Oct 16
0
gfid entries in volume heal info that do not heal
Hi all, I have a volume where the output of volume heal info shows several gfid entries to be healed, but they've been there for weeks and have not healed. Any normal file that shows up on the heal info does get healed as expected, but these gfid entries do not. Is there any way to remove these orphaned entries from the volume so they are no longer stuck in the heal process? Thank you!