search for: subrahmanya

Displaying 20 results from an estimated 67 matches for "subrahmanya".

2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 5:42 PM, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > > > On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org> > wrote: > >> Hi Karthik, >> >> >> Thanks a lot for the explanation. >> >> Does it mean a distributed volume health can b...
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > Since arbiter bricks need not be of same size as the data bricks, if you > > > can configure three more arbiter bricks > > > based on the guidelines in the doc [1], you can do it live and you will > > > have the distribution count also unchanged. >...
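The excerpt above describes adding arbiter bricks to an existing replica 2 volume without changing the distribution count. A minimal sketch of what that conversion typically looks like, assuming a placeholder volume name and arbiter brick paths that are not taken from this thread:

    # Convert replica 2 to replica 3 arbiter 1 by adding one arbiter brick
    # per existing replica pair (placeholder names; try on a scratch volume first)
    gluster volume add-brick testvol replica 3 arbiter 1 \
        arb1:/bricks/arb/testvol arb2:/bricks/arb/testvol arb3:/bricks/arb/testvol
    gluster volume info testvol   # brick layout should now read N x (2 + 1)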
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
...this option will be reduced to, for > example, 5%? Could you point to any best practice document(s)? > Yes you can decrease it to any value. There won't be any side effect. Regards, Karthik > > Regards, > > Anatoliy > > > > > > On 2018-03-13 16:46, Karthik Subrahmanya wrote: > > Hi Anatoliy, > > The heal command is basically used to heal any mismatching contents > between replica copies of the files. > For the command "gluster volume heal <volname>" to succeed, you should > have the self-heal-daemon running, > which is tru...
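A minimal sketch of the heal workflow the excerpt describes, with a placeholder volume name (not from the thread):

    # The self-heal daemon must be running for "gluster volume heal <volname>" to succeed;
    # "gluster volume status" lists a "Self-heal Daemon" entry per node.
    gluster volume status testvol
    gluster volume heal testvol          # heal mismatching contents between replica copies
    gluster volume heal testvol info     # list entries that still need healing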
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > If you want to use the first two bricks as arbiter, then you need to be > aware of the following things: > - Your distribution count will be decreased to 2. What's the significance of this? I'm trying to find documentation on distribution counts in gluster, but my google...
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > > Since arbiter bricks need not be of same size as the data bricks, if > you > > > > can configure three more arbiter bricks > > > > based on the guidelines in the doc [1], you can do it live and you > will > > > > have the distribu...
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > > If you want to use the first two bricks as arbiter, then you need to be > > aware of the following things: > > - Your distribution count will be decreased to 2. > > What's the significance of this? I'm trying to find documentation on > distribution cou...
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > I will try to explain how you can end up in split-brain even with cluster > wide quorum: Yep, the explanation made sense. I hadn't considered the possibility of alternating outages. Thanks! > > > It would be great if you can consider configuring an arbiter or > &g...
2017 Oct 24
3
gfid entries in volume heal info that do not heal
...to > where? /, <path from top of brick>, ? > > On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote: > > In my case I was able to delete the hard links in the .glusterfs folders > of the bricks and it seems to have done the trick, thanks! > > > > *From:* Karthik Subrahmanya [mailto:ksubrahm at redhat.com] > *Sent:* Monday, October 23, 2017 1:52 AM > *To:* Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com> > *Cc:* gluster-users <Gluster-users at gluster.org> > *Subject:* Re: [Gluster-users] gfid entries in volume he...
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes read-only." > > > By...
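The passage quoted above concerns client-quorum on a replica 2 volume. A hedged sketch of how that option is usually set and inspected, with a placeholder volume name:

    # With cluster.quorum-type set to "auto" on replica 2, the first brick must be up;
    # if only the second brick is up, the subvolume becomes read-only, as quoted above.
    gluster volume set testvol cluster.quorum-type auto
    gluster volume get testvol cluster.quorum-type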
2017 Oct 24
0
gfid entries in volume heal info that do not heal
...ake some time. Can't bring the up machine offline as it's in use. At least I have 24 cores to work with. I've only tested with one GFID but the file it referenced _IS_ on the down machine even though it has no GFID in the .glusterfs structure. On Tue, 2017-10-24 at 12:35 +0530, Karthik Subrahmanya wrote: > Hi Jim, > > Can you check whether the same hardlinks are present on both the > bricks & both of them have the link count 2? > If the link count is 2 then "find <brickpath> -samefile > <brickpath/.glusterfs/<first two > bits of gfid>/<next...
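A sketch of the hardlink check described in the excerpt, using a hypothetical brick path and gfid (the real values come from the heal info output):

    BRICK=/bricks/brick1/testvol                     # hypothetical brick path
    GFID=0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9        # hypothetical gfid
    # A healthy file has link count 2: the real path plus the .glusterfs hardlink.
    stat -c '%h %n' "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    # Run the same check on both bricks and compare.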
2017 Oct 23
0
gfid entries in volume heal info that do not heal
...name and if so relative to where? /, <path from top of brick>, ? On Mon, 2017-10-23 at 18:54 +0000, Matt Waymack wrote: > In my case I was able to delete the hard links in the .glusterfs > folders of the bricks and it seems to have done the trick, thanks! > > > From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] > > > Sent: Monday, October 23, 2017 1:52 AM > > To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.c > om> > > Cc: gluster-users <Gluster-users at gluster.org> > > Subject: Re: [Gluster-user...
2018 Apr 12
2
Turn off replication
...etwork Analyst 1 > Center of Advanced Research Computing > 1601 Central Ave. > MSC 01 1190 > Albuquerque, NM 87131-0001 > carc.unm.edu > 575.636.4232 > > On Apr 7, 2018, at 8:29 AM, Karthik Subrahmanya <ksubrahm at redhat.com> > wrote: > > Hi Jose, > > Thanks for providing the volume info. You have 2 subvolumes. Data is > replicated within the bricks of that subvolumes. > First one consisting of Node A's brick1 & Node B's brick1 and the second > one consi...
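The reply above explains that the volume has two replica subvolumes. A hedged sketch of how replication is typically turned off by reducing the replica count to 1 and keeping one brick per subvolume; the volume and brick names are placeholders, not the actual layout from this thread:

    # Remove one brick from each replica pair; data on the removed bricks is no longer served.
    gluster volume remove-brick testvol replica 1 \
        nodeB:/bricks/brick1/testvol nodeB:/bricks/brick2/testvol force
    gluster volume info testvol   # should now show a plain distribute volume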
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > > I will try to explain how you can end up in split-brain even with cluster > > wide quorum: > > Yep, the explanation made sense. I hadn't considered the possibility of > alternating outages. Thanks! > > > > > It would be great if you can consider...
2017 Oct 23
2
gfid entries in volume heal info that do not heal
In my case I was able to delete the hard links in the .glusterfs folders of the bricks and it seems to have done the trick, thanks! From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] Sent: Monday, October 23, 2017 1:52 AM To: Jim Kinney <jim.kinney at gmail.com>; Matt Waymack <mwaymack at nsgdv.com> Cc: gluster-users <Gluster-users at gluster.org> Subject: Re: [Gluster-users] gfid entries in volume heal info that do not heal Hi...
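A hedged sketch of what deleting such a stale hardlink under .glusterfs could look like; the brick path and gfid are hypothetical, and this is only appropriate for entries confirmed to have no corresponding real file:

    BRICK=/bricks/brick1/testvol                     # hypothetical brick path
    GFID=0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9        # hypothetical gfid from heal info
    # gfid files live under .glusterfs/<first two chars>/<next two chars>/<gfid>
    ls -li "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    rm "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"   # remove the stale link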
2018 Feb 05
0
Dir split brain resolution
...seems that I deleted some files that shouldn't and the ovirt engine hosted on this volume was not able to start. Now I am setting up the engine from scratch... In case I see this kind of split brain again I will get back before I start deleting :) Alex On Mon, Feb 5, 2018 at 2:34 PM, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi, > > I am wondering why the other brick is not showing any entry in split brain > in the heal info split-brain output. > Can you give the output of stat & getfattr -d -m . -e hex > <file-path-on-brick> from both the bricks. &gt...
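A sketch of the diagnostics requested in the excerpt, run on each brick (the path is a placeholder):

    # Compare the outputs from both bricks; differing gfids or mismatching
    # trusted.afr.* xattrs show which copy is in split brain.
    stat /bricks/brick1/testvol/path/to/entry
    getfattr -d -m . -e hex /bricks/brick1/testvol/path/to/entry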
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...-arbiter/testset/306/30677af808ad578916f54783904e6342.pack trusted.gfid=0xe46e9a655128456bba0d98568d432717 Is it okay that only gfid info is available on the arbiter brick? -- Best Regards, Seva Gluschenko CTO @ http://webkontrol.ru February 9, 2018 2:01 PM, "Karthik Subrahmanya" <ksubrahm at redhat.com> wrote: On Fri, Feb 9, 2018 at 3:23 PM, Seva Gluschenko <gvs at webkontrol.ru> wrote: Hi Karthik, Thank you for your reply. The heal is still undergoing, as...
2017 Nov 06
0
gfid entries in volume heal info that do not heal
...c/net/dev -- eth3" output[ NOT OK] Errors seen in "cat /proc/net/dev -- mlx_ib0" output[ NOT OK] Errors seen in "cat /proc/net/dev -- enp131s0" outputHigh CPU usage by Self-heal NOTE: Bmidata2 up for over 300 days. due for reboot. On Tue, 2017-10-24 at 12:35 +0530, Karthik Subrahmanya wrote: > Hi Jim, > > Can you check whether the same hardlinks are present on both the > bricks & both of them have the link count 2? > If the link count is 2 then "find <brickpath> -samefile > <brickpath/.glusterfs/<first two > bits of gfid>/<next...
2018 Apr 25
0
Turn off replication
...root at gluster02 glusterfs]# --------------------------------- Jose Sanchez Systems/Network Analyst 1 Center of Advanced Research Computing 1601 Central Ave. MSC 01 1190 Albuquerque, NM 87131-0001 carc.unm.edu 575.636.4232 > On Apr 12, 2018, at 12:11 AM, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > > > > On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: > Hi Karthik > > Looking at the information you have provided me, I would like to make sure that I'm r...
2018 Apr 25
2
Turn off replication
...----------------- > Jose Sanchez > Systems/Network Analyst 1 > Center of Advanced Research Computing > 1601 Central Ave. > MSC 01 1190 > Albuquerque, NM 87131-0001 > carc.unm.edu > 575.636.4232 > >> On Apr 12, 2018, at 12:11 AM, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: >> >> >> >> On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote: >> Hi Karthik >> >> Looking at the info...
2017 Sep 28
1
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
The version I am using is glusterfs 3.6.9 Best regards, Cynthia MBB SM HETRAN SW3 MATRIX Storage Mobile: +86 (0)18657188311 From: Karthik Subrahmanya [mailto:ksubrahm at redhat.com] Sent: Thursday, September 28, 2017 2:37 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.zhou at nokia-sbell.com> Cc: Gluster-users at gluster.org; gluster-devel at gluster.org Subject: Re: [Gluster-users] after hard reboot, split-brain happened, but nothing s...
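A minimal sketch of the commands typically used to list pending and split-brain entries (volume name is a placeholder; exact behaviour may differ on a release as old as 3.6.9):

    gluster volume heal testvol info               # all entries pending heal
    gluster volume heal testvol info split-brain   # entries flagged as split brain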