
Displaying 20 results from an estimated 70 matches for "pkarampu".

2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
Thanks for the swift turn around. Will try this out and let you know. Thanks and Regards, Ram From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] Sent: Monday, July 10, 2017 8:31 AM To: Sanoj Unnikrishnan Cc: Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost Ram, If you see it again, you can use this....
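As background for this thread: the attributes in question can be inspected and, for volume-id, restored directly on a brick. A minimal sketch, assuming a brick path of /bricks/brick1 and a placeholder volume UUID (neither is taken from this thread):

    # dump all trusted.* extended attributes on the brick root (run as root)
    getfattr -d -m . -e hex /bricks/brick1
    # a healthy brick root shows, among others:
    #   trusted.gfid=0x00000000000000000000000000000001
    #   trusted.glusterfs.volume-id=0x<volume-uuid-as-hex>
    # if trusted.glusterfs.volume-id is missing, it can be re-set from the UUID
    # reported by "gluster volume info" (dashes removed):
    setfattr -n trusted.glusterfs.volume-id -v 0x<volume-uuid-hex-without-dashes> /bricks/brick1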
2017 Jul 13
0
[Gluster-devel] gfid and volume-id extended attributes lost
...he reason for this issue. On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy <areddy at commvault.com> wrote: > Thanks for the swift turn around. Will try this out and let you know. > > > > Thanks and Regards, > > Ram > > *From:* Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] > *Sent:* Monday, July 10, 2017 8:31 AM > *To:* Sanoj Unnikrishnan > *Cc:* Ankireddypalle Reddy; Gluster Devel (gluster-devel at gluster.org); > gluster-users at gluster.org > > *Subject:* Re: [Gluster-devel] gfid and volume-id extended attributes lost > > >...
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
...ranith , yes . we can get the pid on all removexattr call and also >> print the backtrace of the glusterfsd process when triggering removing >> xattr. >> I will write the script and reply back. >> >> On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri < >> pkarampu at redhat.com> wrote: >> >>> Ram, >>> As per the code, self-heal was the only candidate which *can* do >>> it. Could you check logs of self-heal daemon and the mount to check if >>> there are any metadata heals on root? >>> >>> >...
2017 Jul 10
2
[Gluster-devel] gfid and volume-id extended attributes lost
...com> wrote: > @ pranith , yes . we can get the pid on all removexattr call and also > print the backtrace of the glusterfsd process when triggering removing > xattr. > I will write the script and reply back. > > On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri < > pkarampu at redhat.com> wrote: > >> Ram, >> As per the code, self-heal was the only candidate which *can* do >> it. Could you check logs of self-heal daemon and the mount to check if >> there are any metadata heals on root? >> >> >> +Sanoj >> >...
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
...ker at redhat.com] On Behalf Of John Mark Walker Sent: Thursday, September 05, 2013 1:06 PM To: Pranith Kumar Karampuri Cc: Song; gluster-devel at nongnu.org Subject: Re: [Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6) Posting to gluster-users. ----- Pranith Kumar Karampuri <pkarampu at redhat.com> wrote: > Song, > Seems like the issue is happening because of double 'memput', Could you let us know the steps to re-create the issue? Or the load that may lead to this? > > Pranith > > ----- Original Message ----- > > From: "Song" <g...
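A hedged aside on the backtrace being discussed: if a core file from the crashed client is available, the stack can be pulled with gdb. Binary and core paths below are placeholders:

    # point gdb at the crashed glusterfs client binary and its core dump
    gdb /usr/sbin/glusterfs /path/to/core -batch -ex "thread apply all bt full"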
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
3.7.19 Thanks and Regards, Ram From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] Sent: Friday, July 07, 2017 11:54 AM To: Ankireddypalle Reddy Cc: Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost On Fri, Jul 7, 2017 at 9:20 PM, Ankireddypalle Reddy <areddy at...
2017 Jul 07
2
[Gluster-devel] gfid and volume-id extended attributes lost
...arting the > volume the attributes were again lost. Had to stop glusterd set attributes > and then start glusterd. After that the volume start succeeded. > Which version is this? > > > Thanks and Regards, > > Ram > > > > *From:* Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] > *Sent:* Friday, July 07, 2017 11:46 AM > *To:* Ankireddypalle Reddy > *Cc:* Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org > *Subject:* Re: [Gluster-devel] gfid and volume-id extended attributes lost > > > > Did anything special h...
2017 Jul 07
3
[Gluster-devel] gfid and volume-id extended attributes lost
...y be that will give more clues. PS: sys_fremovexattr is called only from posix_fremovexattr(), so that doesn't seem to be the culprit as it also has checks to guard against gfid/volume-id removal. > > Thanks and Regards, > > Ram > > *From:* Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] > *Sent:* Friday, July 07, 2017 11:54 AM > > *To:* Ankireddypalle Reddy > *Cc:* Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org > *Subject:* Re: [Gluster-devel] gfid and volume-id extended attributes lost > > > > > > > >...
2017 Jul 07
0
[Gluster-devel] gfid and volume-id extended attributes lost
...k9/ws_brick Options Reconfigured: performance.readdir-ahead: on diagnostics.client-log-level: INFO auth.allow: glusterfs1sds,glusterfs2sds,glusterfs3sds,glusterfs4sds.commvault.com,glusterfs5sds.commvault.com,glusterfs6sds.commvault.com Thanks and Regards, Ram From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] Sent: Friday, July 07, 2017 12:15 PM To: Ankireddypalle Reddy Cc: Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost On Fri, Jul 7, 2017 at 9:25 PM, Ankireddypalle Reddy <areddy at...
2017 Jul 08
2
[Gluster-devel] gfid and volume-id extended attributes lost
...; > diagnostics.client-log-level: INFO > > auth.allow: glusterfs1sds,glusterfs2sds,glusterfs3sds,glusterfs4sds. > commvault.com,glusterfs5sds.commvault.com,glusterfs6sds.commvault.com > > > > Thanks and Regards, > > Ram > > *From:* Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] > *Sent:* Friday, July 07, 2017 12:15 PM > > *To:* Ankireddypalle Reddy > *Cc:* Gluster Devel (gluster-devel at gluster.org); gluster-users at gluster.org > *Subject:* Re: [Gluster-devel] gfid and volume-id extended attributes lost > > > > > > > >...
2017 Jul 10
0
[Gluster-devel] gfid and volume-id extended attributes lost
@ pranith , yes . we can get the pid on all removexattr call and also print the backtrace of the glusterfsd process when triggering removing xattr. I will write the script and reply back. On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <pkarampu at redhat.com > wrote: > Ram, > As per the code, self-heal was the only candidate which *can* do > it. Could you check logs of self-heal daemon and the mount to check if > there are any metadata heals on root? > > > +Sanoj > > Sanoj, > Is there any sy...
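The script promised above is not part of this excerpt. As a rough, untested sketch of the same idea using the kernel audit subsystem (the key name is arbitrary), every process issuing a removexattr-family syscall can be logged; note this records the PID and command but not the glusterfsd userspace backtrace the thread also wants:

    # log all removexattr/fremovexattr/lremovexattr calls system-wide (x86_64)
    auditctl -a always,exit -F arch=b64 -S removexattr -S fremovexattr -S lremovexattr -k xattr-strip
    # later, review which processes removed xattrs
    ausearch -k xattr-strip -i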
2018 Mar 13
1
Expected performance for WORM scenario
...has a feature called write-behind which does that. > > Summary: If you do not need to scale out, stick with a single server > (+DRBD optionally for HA), it will give you the best performance > > > > Ondrej > > > > > > *From:* Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] > *Sent:* Tuesday, March 13, 2018 9:10 AM > > *To:* Ondrej Valousek <Ondrej.Valousek at s3group.com> > *Cc:* Andreas Ericsson <andreas.ericsson at findity.com>; > Gluster-users at gluster.org > *Subject:* Re: [Gluster-users] Expected performance for WORM...
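For reference, write-behind mentioned above is an ordinary volume option; a minimal sketch of toggling and checking it, with "myvol" as a placeholder volume name:

    gluster volume set myvol performance.write-behind on
    gluster volume get myvol performance.write-behind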
2018 Mar 14
2
Expected performance for WORM scenario
...Glusterfs does not support async writes > > > > Summary: If you do not need to scale out, stick with a single server > (+DRBD optionally for HA), it will give you the best performance > > > > Ondrej > > > > > > *From:* Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] > *Sent:* Tuesday, March 13, 2018 9:10 AM > > *To:* Ondrej Valousek <Ondrej.Valousek at s3group.com> > *Cc:* Andreas Ericsson <andreas.ericsson at findity.com>; > Gluster-users at gluster.org > *Subject:* Re: [Gluster-users] Expected performance for WORM...
2017 Jul 24
0
Bug 1473150 - features/shard:Lookup on shard 18 failed. Base file gfid = b00f5de2-d811-44fe-80e5-1f382908a55a [No data available], the [No data available]
...> I am waiting your good news! > > Thank you for your hard work! > > ------------------------------ > zhangjianwei1216 at 163.com > > > *From:* Krutika Dhananjay <kdhananj at redhat.com> > *Date:* 2017-07-20 17:34 > *To:* Pranith Kumar Karampuri <pkarampu at redhat.com> > *CC:* ??? <zhangjianwei1216 at 163.com> > *Subject:* Re: Bug 1473150 - features/shard:Lookup on shard 18 failed. > Base file gfid = b00f5de2-d811-44fe-80e5-1f382908a55a [No data > available], the [No data available] > Hi ???, > > Thanks for your email....
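As a hedged illustration of how such a shard can be checked on the bricks: shards live under <brick>/.shard and are named <base-file-gfid>.<shard-number>. The brick path below is a placeholder; the gfid and shard number come from the subject line:

    ls -l /bricks/brick1/.shard/b00f5de2-d811-44fe-80e5-1f382908a55a.18
    getfattr -d -m . -e hex /bricks/brick1/.shard/b00f5de2-d811-44fe-80e5-1f382908a55a.18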
2017 Jun 13
0
How to remove dead peer, sorry urgent again :(
On 13 June 2017 at 02:56, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote: > We can also do "gluster peer detach <hostname> force" right? Just to be sure I set up a test 3-node vm gluster cluster :) then shut down one of the nodes and tried to remove it. root at gh1:~# gluster peer status Number of Peers: 2 Hostname: gh2.brian.s...
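A short sketch of the sequence being discussed, with a placeholder hostname for the dead node (an illustration, not the exact commands from the thread):

    gluster peer status                              # confirm which peer shows as disconnected
    gluster peer detach dead-node.example.com        # may be refused while the peer is unreachable
    gluster peer detach dead-node.example.com force  # the forced variant mentioned above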
2018 Mar 14
0
Expected performance for WORM scenario
...tings at all can reduce performance to 1/5000 of what I get when writing straight to ramdisk though, and especially when running on a single node instead of in a cluster. Has anyone else set this up and managed to get better write performance? On 13 March 2018 at 08:28, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote: > > > On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek < > Ondrej.Valousek at s3group.com> wrote: > >> Hi, >> >> Gluster will never perform well for small files. >> >> I believe there is nothing you can do with this. >...
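For numbers like the 1/5000 comparison above, a crude small-file write loop of the kind typically used; the mount point, tmpfs path, and file count are arbitrary placeholders:

    # 1000 x 4 KiB fsync'd writes on the Gluster mount, then on tmpfs for comparison
    time bash -c 'for i in $(seq 1 1000); do dd if=/dev/zero of=/mnt/glustervol/f$i bs=4k count=1 conv=fsync status=none; done'
    time bash -c 'for i in $(seq 1 1000); do dd if=/dev/zero of=/dev/shm/f$i bs=4k count=1 conv=fsync status=none; done'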
2018 Mar 13
0
Expected performance for WORM scenario
...legations the same way NFS does. 3. Glusterfs is FUSE based 4. Glusterfs does not support async writes Summary: If you do not need to scale out, stick with a single server (+DRBD optionally for HA), it will give you the best performance Ondrej From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] Sent: Tuesday, March 13, 2018 9:10 AM To: Ondrej Valousek <Ondrej.Valousek at s3group.com> Cc: Andreas Ericsson <andreas.ericsson at findity.com>; Gluster-users at gluster.org Subject: Re: [Gluster-users] Expected performance for WORM scenario On Tue, Mar 13, 2018 at 1...
2018 Mar 13
3
Expected performance for WORM scenario
...performance.stat-prefetch=on performance.cache-invalidation=on performance.md-cache-timeout=600 network.inode-lru-limit=50000 performance.nl-cache=on performance.nl-cache-timeout=600 network.inode-lru-limit=50000 > > > Ondrej > > > > *From:* Pranith Kumar Karampuri [mailto:pkarampu at redhat.com] > *Sent:* Tuesday, March 13, 2018 8:28 AM > *To:* Ondrej Valousek <Ondrej.Valousek at s3group.com> > *Cc:* Andreas Ericsson <andreas.ericsson at findity.com>; > Gluster-users at gluster.org > *Subject:* Re: [Gluster-users] Expected performance for WORM scen...
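The option block quoted above maps onto plain "gluster volume set" calls; a sketch with "myvol" as a placeholder volume name and the values taken from the quote:

    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.cache-invalidation on
    gluster volume set myvol performance.md-cache-timeout 600
    gluster volume set myvol network.inode-lru-limit 50000
    gluster volume set myvol performance.nl-cache on
    gluster volume set myvol performance.nl-cache-timeout 600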
2017 Jun 12
3
How to remove dead peer, sorry urgent again :(
On Sun, Jun 11, 2017 at 2:12 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > > On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson < > lindsay.mathieson at gmail.com> wrote: > >> On 11/06/2017 10:46 AM, WK wrote: >> > I thought you had removed vna as defective and then ADDED in vnh as >> > the replacement? >> > >> > Why is vna
2018 Jan 29
2
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message ----- > From: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > To: "Alan Orth" <alan.orth at gmail.com> > Cc: "gluster-users" <gluster-users at gluster.org> > Sent: Saturday, January 27, 2018 7:31:30 AM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4 > >...
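For context, a hedged sketch of how parallel-readdir is normally enabled and checked on a volume; "myvol" is a placeholder volume name:

    gluster volume set myvol performance.parallel-readdir on
    gluster volume get myvol performance.parallel-readdir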