similar to: [Gluster-devel] Glusterfs and Structured data

Displaying 20 results from an estimated 5000 matches similar to: "[Gluster-devel] Glusterfs and Structured data"

2018 Mar 06
1
SQLite3 on 3 node cluster FS?
On Tue, Mar 6, 2018 at 10:58 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
>
> On Tue, Mar 6, 2018 at 10:22 PM, Paul Anderson <pha at umich.edu> wrote:
>
>> Raghavendra,
>>
>> I've committed my test case to https://github.com/powool/gluster.git -
>> it's grungy, and a work in progress, but I am happy to take change
>>
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
On Tue, Mar 6, 2018 at 10:22 PM, Paul Anderson <pha at umich.edu> wrote:
> Raghavendra,
>
> I've committed my test case to https://github.com/powool/gluster.git -
> it's grungy, and a work in progress, but I am happy to take change
> suggestions, especially if it will save folks significant time.
>
> For the rest, I'll reply inline below...
>
> On Mon,
2018 Jan 30
1
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message -----
> From: "Alan Orth" <alan.orth at gmail.com>
> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>
> Cc: "gluster-users" <gluster-users at gluster.org>
> Sent: Tuesday, January 30, 2018 1:37:40 PM
> Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4
>
> Thank you,
2018 Mar 06
2
SQLite3 on 3 node cluster FS?
Raghavendra,

I've committed my test case to https://github.com/powool/gluster.git - it's grungy, and a work in progress, but I am happy to take change suggestions, especially if it will save folks significant time.

For the rest, I'll reply inline below...

On Mon, Mar 5, 2018 at 10:39 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
> +Csaba.
>
> On Tue, Mar 6,
2017 Jul 11
2
Extremely slow du
Hi Kashif,

Thank you for your feedback! Do you have some data on the nature of performance improvement observed with 3.11 in the new setup? Adding Raghavendra and Poornima for validation of configuration and help with identifying why certain files disappeared from the mount point after enabling readdir-optimize.

Regards,
Vijay

On 07/11/2017 11:06 AM, mohammad kashif wrote:
> Hi Vijay and
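For reference, readdir-optimize mentioned above is a volume-level option. A minimal sketch of checking and disabling it while the disappearing-files issue is investigated; the volume name myvol is an assumption, not taken from the thread:

# Check the current value of readdir-optimize (assumes a volume named "myvol")
gluster volume get myvol cluster.readdir-optimize

# Turn it back off while the missing-files report is being debugged
gluster volume set myvol cluster.readdir-optimize off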
2018 Feb 05
1
Run away memory with gluster mount
Hi Dan,

I had a suggestion and a question in my previous response. Let us know whether the suggestion helps and please let us know about your data-set (like how many directories/files and how these directories/files are organised) to understand the problem better.

<snip>
> In the
> meantime can you remount glusterfs with options
> --entry-timeout=0 and
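A hedged sketch of what such a remount can look like with the FUSE caching timeouts set to zero; the quoted line is cut off in this preview, and the server name, volume name, and mount point below are assumptions:

# Remount the client with attribute/entry caching effectively disabled
# (assumes server "server1", volume "myvol", mount point /mnt/gluster)
umount /mnt/gluster
mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 server1:/myvol /mnt/gluster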
2018 Feb 21
1
Run away memory with gluster mount
On 2/3/2018 8:58 AM, Dan Ragle wrote:
>
>
> On 2/2/2018 2:13 AM, Nithya Balachandran wrote:
>> Hi Dan,
>>
>> It sounds like you might be running into [1]. The patch has been
>> posted upstream and the fix should be in the next release.
>> In the meantime, I'm afraid there is no way to get around this without
>> restarting the process.
>>
2018 Mar 06
0
SQLite3 on 3 node cluster FS?
+Csaba.

On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <pha at umich.edu> wrote:
> Raghavendra,
>
> Thanks very much for your reply.
>
> I fixed our data corruption problem by disabling the volume
> performance.write-behind flag as you suggested, and simultaneously
> disabling caching in my client side mount command.
>
Good to know it worked. Can you give us the
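As a rough illustration of the fix quoted above (disabling write-behind on the volume and caching on the client), the commands might look like the following; the exact client-side options used are not shown in the snippet, and the volume name, server, and mount point are assumptions:

# Disable the write-behind translator on the volume (assumes a volume named "myvol")
gluster volume set myvol performance.write-behind off

# Remount the client with caching disabled; server/volume/mount point are placeholders
mount -t glusterfs -o direct-io-mode=enable,attribute-timeout=0,entry-timeout=0 server1:/myvol /mnt/gluster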
2017 Aug 08
1
Slow write times to gluster disk
Soumya,

It's:

[root at mseas-data2 ~]# glusterfs --version
glusterfs 3.7.11 built on Apr 27 2016 14:09:20
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3
2018 Feb 01
0
Run away memory with gluster mount
On 1/30/2018 6:31 AM, Raghavendra Gowdappa wrote:
>
>
> ----- Original Message -----
>> From: "Dan Ragle" <daniel at Biblestuph.com>
>> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>
>> Cc: gluster-users at gluster.org, "Csaba Henk" <chenk at redhat.com>,
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you.

With performance.write-behind-trickling-writes ON (default):

## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1
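The preview is cut off here. For the comparison run with the option off, the toggle would presumably be something like the following one-liner before re-running the same fio job; the volume name is assumed and the option key is as discussed in the thread:

# Turn off trickling writes in write-behind before repeating the fio run
gluster volume set myvol performance.write-behind-trickling-writes off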
2018 Feb 12
0
[FOSDEM'18] Optimizing Software Defined Storage for the Age of Flash
The talk is up on youtube at:
https://www.youtube.com/watch?v=0oQYPKD_kJg

regards,
Raghavendra

On Tue, Jan 30, 2018 at 9:14 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
> Note that live-streaming is available at:
> https://fosdem.org/2018/schedule/streaming/
>
> The talks will be archived too.
>
> ----- Original Message -----
> > From: "Raghavendra
2018 Jan 30
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Thank you, Raghavendra. I guess this cosmetic fix will be in 3.12.6? I'm also looking forward to seeing stability fixes to parallel-readdir and/or readdir-ahead in 3.12.x. :)

Cheers,

On Mon, Jan 29, 2018 at 9:26 AM Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
>
> ----- Original Message -----
> > From: "Pranith Kumar Karampuri" <pkarampu at
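For readers unfamiliar with the options being discussed, a hedged sketch of how they are typically enabled; the volume name is an assumption, and parallel-readdir depends on readdir-ahead being on:

# Enable readdir-ahead first, then parallel-readdir (assumes a volume named "myvol")
gluster volume set myvol performance.readdir-ahead on
gluster volume set myvol performance.parallel-readdir on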
2018 Feb 03
0
Run away memory with gluster mount
On 2/2/2018 2:13 AM, Nithya Balachandran wrote:
> Hi Dan,
>
> It sounds like you might be running into [1]. The patch has been posted
> upstream and the fix should be in the next release.
> In the meantime, I'm afraid there is no way to get around this without
> restarting the process.
>
> Regards,
> Nithya
>
>
2018 Jan 30
1
Run away memory with gluster mount
----- Original Message -----
> From: "Dan Ragle" <daniel at Biblestuph.com>
> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>
> Cc: gluster-users at gluster.org, "Csaba Henk" <chenk at redhat.com>, "Niels de Vos" <ndevos at redhat.com>, "Nithya
>
2018 Mar 05
6
SQLite3 on 3 node cluster FS?
Raghavendra,

Thanks very much for your reply.

I fixed our data corruption problem by disabling the volume performance.write-behind flag as you suggested, and simultaneously disabling caching in my client side mount command.

In very modest testing, the flock() case appears to me to work well - before it would corrupt the db within a few transactions.

Testing using built-in sqlite3 locks is
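A minimal sketch of the flock() pattern described above, as it might be run from each client against the shared mount; the paths, table schema, and loop count are illustrative and not taken from the linked test case:

# Create the table once, then serialize writers across clients with an
# external flock(1) on a shared lock file on the GlusterFS mount
DB=/mnt/gluster/test.db
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS t (k INTEGER PRIMARY KEY, v TEXT);'

for i in $(seq 1 1000); do
    flock /mnt/gluster/test.lock \
        sqlite3 "$DB" "INSERT INTO t (v) VALUES ('$HOSTNAME-$i');"
done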
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi Raghavendra,

> On 20 Mar 2018, at 1:55 pm, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
> Aggregating large number of small writes by write-behind into large writes has been merged on master:
> https://github.com/gluster/glusterfs/issues/364
>
> Would like to know whether it helps for this usecase.
2017 Aug 08
0
Slow write times to gluster disk
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Soumya Koduri" <skoduri at redhat.com>, gluster-users at gluster.org, "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ben Turner" <bturner at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>, "Raghavendra
2017 Jun 18
1
Extremely slow du
Hi Mohammad,

A lot of time is being spent in addressing metadata calls as expected. Can you consider testing out with 3.11 with md-cache [1] and readdirp [2] improvements? Adding Poornima and Raghavendra who worked on these enhancements to help out further.

Thanks,
Vijay

[1] https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/
[2] https://github.com/gluster/glusterfs/issues/166

On
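The md-cache improvements referenced in [1] above are typically exercised by turning on cache invalidation and lengthening the metadata cache timeout. A hedged sketch; the volume name and timeout values are assumptions, not taken from the thread:

# Enable upcall-based cache invalidation and a longer md-cache timeout
# (assumes a volume named "myvol"; 600 seconds is an illustrative value)
gluster volume set myvol features.cache-invalidation on
gluster volume set myvol features.cache-invalidation-timeout 600
gluster volume set myvol performance.stat-prefetch on
gluster volume set myvol performance.cache-invalidation on
gluster volume set myvol performance.md-cache-timeout 600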
2018 Jan 18
1
[Gluster-devel] cluster/dht: restrict migration of opened files
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
> All,
>
> Patch [1] prevents migration of opened files during rebalance operation.
> If patch [1] affects you, please voice out your concerns. [1] is a stop-gap
> fix for the problem discussed in issues [2][3]
>
What is the impact on VM and gluster-block usecases after this patch? Will