Similar to: cluster/dht: restrict migration of opened files

Displaying 20 results from an estimated 4000 matches similar to: "cluster/dht: restrict migration of opened files"

2018 Jan 18
1
[Gluster-devel] cluster/dht: restrict migration of opened files
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > All, > > Patch [1] prevents migration of opened files during rebalance operation. > If patch [1] affects you, please voice your concerns. [1] is a stop-gap > fix for the problem discussed in issues [2][3] > What is the impact on VM and gluster-block use cases after this patch? Will
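
For readers skimming this thread: when rebalance declines to migrate a file (for example, one that is held open), the skip is counted in the rebalance status output. A minimal sketch with the standard CLI, assuming a placeholder volume name "myvol"; the rebalance log path shown is the usual default and may differ on your distribution:

    # kick off a rebalance and watch the per-node "skipped" counter
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
    # details about skipped or failed files end up in the rebalance log
    less /var/log/glusterfs/myvol-rebalance.log
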
2018 Jan 18
0
[Gluster-devel] cluster/dht: restrict migration of opened files
This does not restrict tiered migrations. Susant On 18 Jan 2018 8:18 pm, "Milind Changire" <mchangir at redhat.com> wrote: On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > All, > > Patch [1] prevents migration of opened files during rebalance operation. > If patch [1] affects you, please voice your concerns. [1] is a
2018 Apr 05
0
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
On Thu, Apr 5, 2018 at 10:48 AM, Artem Russakovskii <archon810 at gmail.com> wrote: > Hi, > > I noticed when I run gluster volume heal data info, the following message > shows up in the log, along with other stuff: > > [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory >> selfheal failed: Unable to form layout for directory / > > > I'm
2018 Jan 30
1
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message ----- > From: "Alan Orth" <alan.orth at gmail.com> > To: "Raghavendra Gowdappa" <rgowdapp at redhat.com> > Cc: "gluster-users" <gluster-users at gluster.org> > Sent: Tuesday, January 30, 2018 1:37:40 PM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4 > > Thank you,
2018 Apr 05
2
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /
Hi, I noticed when I run gluster volume heal data info, the following message shows up in the log, along with other stuff: [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal > failed: Unable to form layout for directory / I'm seeing it on Gluster 4.0.1 and 3.13.2. Here's the full log after running heal info:
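
"Unable to form layout for directory /" generally means DHT could not assemble a complete layout from its subvolumes, often because a brick was down or unreachable at lookup time. A hedged first check, reusing the volume name "data" from the 0-data-dht log prefix; the client log path is a placeholder that depends on the mount point:

    # confirm every brick process is online and has its port open
    gluster volume status data
    # look for disconnects or layout errors in the client mount log
    grep -iE "disconnect|unable to form layout" /var/log/glusterfs/<mount-point>.log
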
2018 Jan 29
2
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message ----- > From: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > To: "Alan Orth" <alan.orth at gmail.com> > Cc: "gluster-users" <gluster-users at gluster.org> > Sent: Saturday, January 27, 2018 7:31:30 AM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4 > > Adding
2018 Jan 29
2
[FOSDEM'18] Optimizing Software Defined Storage for the Age of Flash
All, Krutika, Manoj, and I are presenting a talk during FOSDEM'18 [1]. Please plan to attend. While we are at the event (present on the 3rd and 4th), we are happy to chat with you about anything related to GlusterFS. The efforts leading to this talk are captured in [2]. [1] https://fosdem.org/2018/schedule/event/optimizing_sds/ [2] https://bugzilla.redhat.com/show_bug.cgi?id=1467614 regards,
2018 Jan 30
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Thank you, Raghavendra. I guess this cosmetic fix will be in 3.12.6? I'm also looking forward to seeing stability fixes to parallel-readdir and/or readdir-ahead in 3.12.x. :) Cheers, On Mon, Jan 29, 2018 at 9:26 AM Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > ----- Original Message ----- > > From: "Pranith Kumar Karampuri" <pkarampu at
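
For anyone who lands here with the "parallel-readdir is not recognized" warning: the option itself is managed through the normal volume-option commands, and per the thread the warning is a cosmetic issue on the affected 3.12.x clients. A minimal sketch, assuming a placeholder volume name "myvol":

    # enable the feature and confirm the value the cluster has recorded
    gluster volume set myvol performance.parallel-readdir on
    gluster volume get myvol performance.parallel-readdir
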
2018 Feb 02
3
Run away memory with gluster mount
Hi Dan, It sounds like you might be running into [1]. The patch has been posted upstream and the fix should be in the next release. In the meantime, I'm afraid there is no way to get around this without restarting the process. Regards, Nithya [1] https://bugzilla.redhat.com/show_bug.cgi?id=1541264 On 2 February 2018 at 02:57, Dan Ragle <daniel at biblestuph.com> wrote: > >
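
While waiting for the fix in [1], the leak can be confirmed (rather than guessed at) with a statedump of the mount process. A hedged sketch; the dump directory shown is the usual default and the PID is a placeholder:

    # identify the glusterfs client process backing the mount
    pgrep -fa glusterfs
    # ask it to write a statedump, then repeat later and compare the two dumps
    kill -USR1 <pid-of-mount-process>
    ls -lt /var/run/gluster/glusterdump.*
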
2018 Feb 03
0
Run away memory with gluster mount
On 2/2/2018 2:13 AM, Nithya Balachandran wrote: > Hi Dan, > > It sounds like you might be running into [1]. The patch has been posted > upstream and the fix should be in the next release. > In the meantime, I'm afraid there is no way to get around this without > restarting the process. > > Regards, > Nithya > >
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > The gfid mismatch here is between the shard and its "link-to" file, the > creation of which happens at a layer below that of shard translator on the > stack. > > Adding DHT devs to take a look. > Thanks Krutika. I assume shard doesn't do any dentry operations like rename,
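
To see the mismatch being discussed, the gfid and linkto xattrs can be read straight off the bricks. A minimal hedged sketch; the brick path, base gfid, and shard number are placeholders:

    # run on each brick that holds a copy of the shard and compare trusted.gfid
    getfattr -d -m . -e hex /path/to/brick/.shard/<base-gfid>.<shard-number>
    # a DHT "link-to" file additionally carries a trusted.glusterfs.dht.linkto
    # xattr and has sticky-bit-only permissions (mode ---------T)
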
2018 Feb 05
1
Run away memory with gluster mount
Hi Dan, I had a suggestion and a question in my previous response. Let us know whether the suggestion helps and please let us know about your data-set (like how many directories/files and how these directories/files are organised) to understand the problem better. <snip> > In the > meantime can you remount glusterfs with options > --entry-timeout=0 and
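
The remount suggestion in the quoted (and truncated) text corresponds to FUSE mount options that mount.glusterfs accepts. A minimal sketch showing only the option visible above, with placeholder server, volume, and mount-point names:

    umount /mnt/glustervol
    # remount with dentry caching disabled on the client
    mount -t glusterfs -o entry-timeout=0 server1:/myvol /mnt/glustervol
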
2018 Mar 06
1
SQLite3 on 3 node cluster FS?
On Tue, Mar 6, 2018 at 10:58 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > On Tue, Mar 6, 2018 at 10:22 PM, Paul Anderson <pha at umich.edu> wrote: > >> Raghavendra, >> >> I've committed my test case to https://github.com/powool/gluster.git - >> it's grungy, and a work in progress, but I am happy to take change >>
2018 Mar 06
2
SQLite3 on 3 node cluster FS?
Raghavendra, I've committed my test case to https://github.com/powool/gluster.git - it's grungy, and a work in progress, but I am happy to take change suggestions, especially if it will save folks significant time. For the rest, I'll reply inline below... On Mon, Mar 5, 2018 at 10:39 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > +Csaba. > > On Tue, Mar 6,
2018 Jan 29
2
Run away memory with gluster mount
On 1/29/2018 2:36 AM, Raghavendra Gowdappa wrote: > > > ----- Original Message ----- >> From: "Ravishankar N" <ravishankar at redhat.com> >> To: "Dan Ragle" <daniel at Biblestuph.com>, gluster-users at gluster.org >> Cc: "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>,
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
Sorry for the delay, Ian :). This looks to be a genuine issue which requires some effort to fix. Can you file a bug? I need the following information attached to the bug: * Client and brick logs. If you can reproduce the issue, please set diagnostics.client-log-level and diagnostics.brick-log-level to TRACE. If you cannot reproduce the issue or if you cannot accommodate such big logs, please set
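
The log-level bump being requested is done with the two diagnostics options named above. A minimal sketch, assuming a placeholder volume name "myvol"; remember to revert afterwards because TRACE logs grow very quickly:

    gluster volume set myvol diagnostics.client-log-level TRACE
    gluster volume set myvol diagnostics.brick-log-level TRACE
    # reproduce the issue, collect /var/log/glusterfs/ from clients and bricks,
    # then drop back to the default level
    gluster volume set myvol diagnostics.client-log-level INFO
    gluster volume set myvol diagnostics.brick-log-level INFO
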
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
+Csaba. On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <pha at umich.edu> wrote: > Raghavendra, > > Thanks very much for your reply. > > I fixed our data corruption problem by disabling the volume > performance.write-behind flag as you suggested, and simultaneously > disabling caching in my client side mount command. > Good to know it worked. Can you give us the
2018 Feb 21
1
Run away memory with gluster mount
On 2/3/2018 8:58 AM, Dan Ragle wrote: > > > On 2/2/2018 2:13 AM, Nithya Balachandran wrote: >> Hi Dan, >> >> It sounds like you might be running into [1]. The patch has been >> posted upstream and the fix should be in the next release. >> In the meantime, I'm afraid there is no way to get around this without >> restarting the process. >>
2018 Mar 05
6
SQLite3 on 3 node cluster FS?
Raghavendra, Thanks very much for your reply. I fixed our data corruption problem by disabling the volume performance.write-behind flag as you suggested, and simultaneously disabling caching in my client side mount command. In very modest testing, the flock() case appears to me to work well - before it would corrupt the db within a few transactions. Testing using built-in sqlite3 locks is
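
For reference, the two changes described here (write-behind off on the volume, caching off on the client mount) look roughly like the sketch below. The volume name is a placeholder, and direct-io-mode=enable is only one way to disable client-side caching; the thread does not say exactly which mount options were used:

    # server side: turn off the write-behind performance translator
    gluster volume set myvol performance.write-behind off
    # client side: remount with direct I/O so the kernel page cache is bypassed
    mount -t glusterfs -o direct-io-mode=enable server1:/myvol /mnt/glustervol
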
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
On Tue, Mar 6, 2018 at 10:22 PM, Paul Anderson <pha at umich.edu> wrote: > Raghavendra, > > I've committed my test case to https://github.com/powool/gluster.git - > it's grungy, and a work in progress, but I am happy to take change > suggestions, especially if it will save folks significant time. > > For the rest, I'll reply inline below... > > On Mon,