similar to: Documentation on readdir performance

Displaying 20 results from an estimated 10000 matches similar to: "Documentation on readdir performance"

2018 Jan 26
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Can you please test whether it is parallel-readdir or readdir-ahead that gives the disconnects, so we know which one to disable? parallel-readdir is the one doing the magic, per the PDF from last year: https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf -v On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <alan.orth at gmail.com> wrote: > By the way, on a slightly related note, I'm pretty
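For reference, isolating the two options generally means toggling them one at a time on the affected volume and watching for disconnects after each change. A minimal sketch, assuming the volume is named "homes" as in the related thread:

  $ gluster volume set homes performance.parallel-readdir off   # disable the first suspect, observe
  $ gluster volume set homes performance.readdir-ahead off      # then the second, if needed
  $ gluster volume get homes performance.parallel-readdir       # confirm the current values
  $ gluster volume get homes performance.readdir-ahead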
2018 Jan 30
1
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message ----- > From: "Alan Orth" <alan.orth at gmail.com> > To: "Raghavendra Gowdappa" <rgowdapp at redhat.com> > Cc: "gluster-users" <gluster-users at gluster.org> > Sent: Tuesday, January 30, 2018 1:37:40 PM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4 > > Thank you,
2018 Jan 26
1
parallel-readdir is not recognized in GlusterFS 3.12.4
Dear Vlad, I'm sorry, I don't want to test this again on my system just yet! It caused too much instability for my users and I don't have enough resources for a development environment. The only other variable that changed before the crashes was the metadata-cache group[0], which I enabled the same day as the parallel-readdir and readdir-ahead options: $ gluster volume set homes
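The volume-set command above is cut off in the snippet and cannot be recovered from it; the settings it describes (the metadata-cache group profile plus the two readdir options) would typically be applied as below. A sketch only, with the exact commands assumed rather than taken from the thread:

  $ gluster volume set homes group metadata-cache
  $ gluster volume set homes performance.parallel-readdir on
  $ gluster volume set homes performance.readdir-ahead on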
2018 Jan 30
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Thank you, Raghavendra. I guess this cosmetic fix will be in 3.12.6? I'm also looking forward to seeing stability fixes to parallel-readdir and/or readdir-ahead in 3.12.x. :) Cheers, On Mon, Jan 29, 2018 at 9:26 AM Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > ----- Original Message ----- > > From: "Pranith Kumar Karampuri" <pkarampu at
2018 Jan 29
2
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message ----- > From: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > To: "Alan Orth" <alan.orth at gmail.com> > Cc: "gluster-users" <gluster-users at gluster.org> > Sent: Saturday, January 27, 2018 7:31:30 AM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4 > > Adding
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
Sorry for the delay, Ian :). This looks to be a genuine issue which will require some effort to fix. Can you file a bug? I need the following information attached to the bug: * Client and brick logs. If you can reproduce the issue, please set diagnostics.client-log-level and diagnostics.brick-log-level to TRACE. If you cannot reproduce the issue or if you cannot accommodate such big logs, please set
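The log-level settings referred to above are ordinary volume options; a minimal sketch of how they would be set and later reverted, with the volume name left as a placeholder:

  $ gluster volume set <volname> diagnostics.client-log-level TRACE
  $ gluster volume set <volname> diagnostics.brick-log-level TRACE
  # revert to the defaults once the logs have been captured
  $ gluster volume reset <volname> diagnostics.client-log-level
  $ gluster volume reset <volname> diagnostics.brick-log-level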
2018 Jan 25
2
parallel-readdir is not recognized in GlusterFS 3.12.4
By the way, on a slightly related note, I'm pretty sure either parallel-readdir or readdir-ahead has a regression in GlusterFS 3.12.x. We are running CentOS 7 with kernel-3.10.0-693.11.6.el7.x86_64. I updated my servers and clients to 3.12.4 and enabled these two options after reading about them in the 3.10.0 and 3.11.0 release notes. In the days after enabling these two options all of my
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
Raghavendra, Sorry for the late follow up. I have some more data on the issue. The issue tends to happen when the shards are created. The easiest time to reproduce this is during an initial VM disk format. This is a log from a test VM that was launched, and then partitioned and formatted with LVM / XFS: [2018-04-03 02:05:00.838440] W [MSGID: 109048]
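A rough sketch of the reproduction path described above, i.e. exercising first-time shard creation by creating and formatting a fresh disk image on the sharded volume; the mount path and image name are assumptions, not taken from the thread:

  # on a FUSE mount of the sharded volume
  $ qemu-img create -f raw /mnt/vmstore/test-disk.img 20G
  $ mkfs.xfs -f /mnt/vmstore/test-disk.img
  # or attach the image to a test VM and partition/format it with LVM + XFS as in the report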
2017 Jun 01
0
FW: ATTN: nbalacha IRC - Gluster - BlackoutWNCT requested info for 0byte file issue
Hey Nithya, root@PB-WA-AA-00-A:/# glusterfs -V glusterfs 3.10.1 Repository revision: git://git.gluster.org/glusterfs.git Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/> GlusterFS comes with ABSOLUTELY NO WARRANTY. It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
Ian, do you have a reproducer for this bug? If not a specific one, a general outline of what operations were done on the file will help. regards, Raghavendra On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> > wrote: > >> The gfid mismatch
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > The gfid mismatch here is between the shard and its "link-to" file, the > creation of which happens at a layer below that of shard translator on the > stack. > > Adding DHT devs to take a look. > Thanks Krutika. I assume shard doesn't do any dentry operations like rename,
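For context, the mismatch being discussed is visible directly on the brick back-end; one plausible way to compare the two copies, sketched with placeholder brick paths and shard name:

  # gfid of the shard data file on one brick (run against the brick path, not the mount)
  $ getfattr -e hex -n trusted.gfid /bricks/brick1/vol/.shard/<base-gfid>.1
  # gfid and DHT link-to target of the zero-byte linkto file on another brick
  $ getfattr -e hex -n trusted.gfid /bricks/brick2/vol/.shard/<base-gfid>.1
  $ getfattr -e hex -n trusted.glusterfs.dht.linkto /bricks/brick2/vol/.shard/<base-gfid>.1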
2018 Feb 05
1
Run away memory with gluster mount
Hi Dan, I had a suggestion and a question in my previous response. Let us know whether the suggestion helps and please let us know about your data-set (like how many directories/files and how these directories/files are organised) to understand the problem better. <snip> > In the > meantime can you remount glusterfs with options > --entry-timeout=0 and
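The remount being suggested uses the FUSE client's lookup-cache timeouts; a sketch of the direct invocation, with server, volume, and mount point as placeholders (the second option is cut off in the snippet, so attribute-timeout=0 below is an assumption, not from the thread):

  $ umount /mnt/glustervol
  $ glusterfs --volfile-server=<server> --volfile-id=<volname> --entry-timeout=0 --attribute-timeout=0 /mnt/glustervol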
2018 Jan 18
1
[Gluster-devel] cluster/dht: restrict migration of opened files
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > All, > > Patch [1] prevents migration of opened files during the rebalance operation. > If patch [1] affects you, please voice your concerns. [1] is a stop-gap > fix for the problem discussed in issues [2][3] > What is the impact on VM and gluster-block usecases after this patch? Will
2018 Feb 21
1
Run away memory with gluster mount
On 2/3/2018 8:58 AM, Dan Ragle wrote: > > > On 2/2/2018 2:13 AM, Nithya Balachandran wrote: >> Hi Dan, >> >> It sounds like you might be running into [1]. The patch has been >> posted upstream and the fix should be in the next release. >> In the meantime, I'm afraid there is no way to get around this without >> restarting the process. >>
2018 Feb 01
0
Run away memory with gluster mount
On 1/30/2018 6:31 AM, Raghavendra Gowdappa wrote: > > > ----- Original Message ----- >> From: "Dan Ragle" <daniel at Biblestuph.com> >> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Ravishankar N" <ravishankar at redhat.com> >> Cc: gluster-users at gluster.org, "Csaba Henk" <chenk at redhat.com>,
2018 Jan 16
2
cluster/dht: restrict migration of opened files
All, Patch [1] prevents migration of opened files during the rebalance operation. If patch [1] affects you, please voice your concerns. [1] is a stop-gap fix for the problem discussed in issues [2][3]. [1] https://review.gluster.org/#/c/19202/ [2] https://github.com/gluster/glusterfs/issues/308 [3] https://github.com/gluster/glusterfs/issues/347 regards, Raghavendra
2018 Feb 03
0
Run away memory with gluster mount
On 2/2/2018 2:13 AM, Nithya Balachandran wrote: > Hi Dan, > > It sounds like you might be running into [1]. The patch has been posted > upstream and the fix should be in the next release. > In the meantime, I'm afraid there is no way to get around this without > restarting the process. > > Regards, > Nithya > >
2018 Jan 10
1
Exact purpose of network.ping-timeout
----- Original Message ----- > From: "Raghavendra Gowdappa" <rgowdapp at redhat.com> > To: "Omar Kohl" <omar.kohl at iternity.com> > Cc: gluster-users at gluster.org > Sent: Wednesday, January 10, 2018 10:56:21 AM > Subject: Re: [Gluster-users] Exact purpose of network.ping-timeout > > Sorry about the delayed response. Had to dig into the
2018 Feb 09
0
[Gluster-devel] Glusterfs and Structured data
+gluster-users Another guideline we can provide is to disable all performance xlators for workloads requiring strict metadata consistency (even for non-gluster-block usecases like the native fuse mount). Note that we might still be able to keep a few perf xlators turned on, but that will require some experimentation. The safest and easiest approach would be to turn off the following xlators: * performance.read-ahead
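The xlator list is cut off above; only performance.read-ahead is confirmed by the snippet, so the remaining options below are assumptions about the usual performance translators one would disable for this. A sketch with a placeholder volume name:

  $ gluster volume set <volname> performance.read-ahead off
  $ gluster volume set <volname> performance.write-behind off
  $ gluster volume set <volname> performance.io-cache off
  $ gluster volume set <volname> performance.quick-read off
  $ gluster volume set <volname> performance.stat-prefetch off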
2018 Feb 02
3
Run away memory with gluster mount
Hi Dan, It sounds like you might be running into [1]. The patch has been posted upstream and the fix should be in the next release. In the meantime, I'm afraid there is no way to get around this without restarting the process. Regards, Nithya [1]https://bugzilla.redhat.com/show_bug.cgi?id=1541264 On 2 February 2018 at 02:57, Dan Ragle <daniel at biblestuph.com> wrote: > >