
Displaying 20 results from an estimated 10000 matches similar to: "Shutting down a GlusterFS server."

2013 Aug 21
1
FileSize changing in GlusterNodes
Hi, When I upload files into the gluster volume, it replicates all the files to both gluster nodes. But the file size slightly varies by 4-10KB, which changes the md5sum of the file. Command to check file size: du -k *. I'm using GlusterFS 3.3.1 with CentOS 6.4. This is creating inconsistency between the files on both the bricks. What is the reason for this changed file size and how can
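A quick way to see what actually differs between the bricks is to compare apparent size and checksums rather than allocated blocks, since du -k reports disk usage, which can vary between filesystems even for identical content. A rough sketch, with /mnt/brick/file.dat standing in for a real brick path:

    ls -l /mnt/brick/file.dat                   # apparent size in bytes
    du -k --apparent-size /mnt/brick/file.dat   # content size in KB
    du -k /mnt/brick/file.dat                   # allocated blocks; may legitimately differ per brick
    md5sum /mnt/brick/file.dat                  # should match across bricks if the contents are identical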
2013 Sep 19
2
Support for GlusterFS
Hi, Is there an option to procure support for a glusterfs deployment? As we are moving into core production scenarios with glusterfs in mind, it would be slightly relieving to have this confirmation!! Thanks & Regards, Bobby Jacob
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi, Which is the recommended filesystem to be used for the bricks in GlusterFS? XFS/EXT3/EXT4, etc.? Thanks & Regards, Bobby Jacob Senior Technical Systems Engineer | eGroup
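For reference, the upstream documentation generally points at XFS with a 512-byte inode size for bricks, so that Gluster's extended attributes fit in the inode. A sketch of preparing a brick under that assumption (/dev/sdb1 and /export/brick1 are hypothetical):

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount -t xfs /dev/sdb1 /export/brick1
    echo '/dev/sdb1 /export/brick1 xfs defaults 0 0' >> /etc/fstab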
2013 Jul 09
2
Gluster Self Heal
Hi, I have a 2-node gluster with 3 TB storage. 1) I believe the "glusterfsd" is responsible for the self healing between the 2 nodes. 2) Due to some network error, the replication stopped for some reason but the application was accessing the data from node1. When I manually try to start the "glusterfsd" service, it's not starting. Please advise on how I can maintain
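As a hedged note, the per-volume self-heal daemon is spawned by glusterd rather than started on its own; a sketch of checking it and getting replication going again, assuming a volume named myvol:

    gluster volume status myvol        # lists each brick plus the Self-heal Daemon entries
    ps aux | grep glustershd           # the self-heal daemon runs as a glusterfs process
    service glusterd restart           # restarting glusterd respawns brick and self-heal processes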
2013 Jul 24
2
Healing in glusterfs 3.3.1
Hi, I have a glusterfs 3.3.1 setup with 2 servers and a replicated volume used by 4 clients. Sometimes from some clients I can't access some of the files. After I force a full heal on the brick I see several files healed. Is this behavior normal? Thanks -- Paulo Silva <paulojjs at gmail.com>
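For reference, the full-heal cycle described here can be driven and inspected with the 3.3-era CLI; a sketch, with datavol as a placeholder volume name:

    gluster volume heal datavol full          # crawl the volume and heal anything out of sync
    gluster volume heal datavol info healed   # entries healed in the recent cycle
    gluster volume heal datavol info          # entries still pending heal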
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi, I'm running GlusterFS 3.3.1 on CentOS 6.4.
# gluster volume status
Status of volume: glustervol
Gluster process                          Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick      24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
2013 Aug 22
2
Error when creating volume
Hello, I've removed a volume and I can't re-create it:
gluster volume create gluster-export gluster-6:/export gluster-5:/export gluster-4:/export gluster-3:/export
/export or a prefix of it is already part of a volume
I've formatted the partition and reinstalled the 4 gluster servers and the error still appears. Any idea? Thanks.
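This error usually means the brick directory still carries the extended attributes of the old volume, which survive a reinstall if the brick filesystem itself wasn't wiped. A common cleanup, sketched against the /export path from the command above and to be repeated on every server:

    setfattr -x trusted.glusterfs.volume-id /export
    setfattr -x trusted.gfid /export
    rm -rf /export/.glusterfs
    # then retry: gluster volume create gluster-export ...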
2013 Jul 15
4
GlusterFS 3.4.0 and 3.3.2 released!
Hi All, 3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS 3.4.0 can be downloaded from [1] and release notes are available at [2]. Upgrade instructions can be found at [3]. If you would like to propose bug fix candidates or minor features for inclusion in 3.4.1, please add them at [4]. 3.3.2 packages can be downloaded from [5]. A big note of thanks to everyone who helped in
2013 Jun 13
1
Ubuntu 12.04 and fallocate()
Hey all, Trying to use fallocate with qcow2 images to increase performance. When doing so (with OpenStack), my Gluster mountpoint goes into "Transport endpoint is not connected". I am running the Ubuntu 12.04 version of glusterfs-client/server (3.2.5-1ubuntu1) and fuse (2.8.6-2ubuntu2). Any ideas? Thanks, Jacob
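A minimal way to reproduce this outside of OpenStack is to call fallocate directly against the mount, or to preallocate a qcow2 image the way the hypervisor would; a sketch, with /mnt/gluster as a hypothetical mount point:

    fallocate -l 10G /mnt/gluster/test.img
    qemu-img create -f qcow2 -o preallocation=metadata /mnt/gluster/test.qcow2 10G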
2018 Jan 23
6
parallel-readdir is not recognized in GlusterFS 3.12.4
Hello, I saw that parallel-readdir was an experimental feature in GlusterFS version 3.10.0, became stable in version 3.11.0, and is now recommended for small file workloads in the Red Hat Gluster Storage Server documentation[2]. I've successfully enabled this on one of my volumes but I notice the following in the client mount log: [2018-01-23 10:24:24.048055] W [MSGID: 101174]
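For context, the option in question is set per volume; a sketch of enabling it and checking how it is recorded, assuming a volume named myvol:

    gluster volume set myvol performance.readdir-ahead on      # parallel-readdir builds on readdir-ahead
    gluster volume set myvol performance.parallel-readdir on
    gluster volume info myvol | grep readdir                   # confirm both options are recorded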
2018 Jan 29
2
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message ----- > From: "Pranith Kumar Karampuri" <pkarampu at redhat.com> > To: "Alan Orth" <alan.orth at gmail.com> > Cc: "gluster-users" <gluster-users at gluster.org> > Sent: Saturday, January 27, 2018 7:31:30 AM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4 > > Adding
2018 Jan 27
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Adding devs who work on it. On 23 Jan 2018 10:40 pm, "Alan Orth" <alan.orth at gmail.com> wrote: > Hello, > > I saw that parallel-readdir was an experimental feature in GlusterFS > version 3.10.0, became stable in version 3.11.0, and is now recommended for > small file workloads in the Red Hat Gluster Storage Server > documentation[2]. I've successfully
2018 Jan 30
1
parallel-readdir is not recognized in GlusterFS 3.12.4
----- Original Message ----- > From: "Alan Orth" <alan.orth at gmail.com> > To: "Raghavendra Gowdappa" <rgowdapp at redhat.com> > Cc: "gluster-users" <gluster-users at gluster.org> > Sent: Tuesday, January 30, 2018 1:37:40 PM > Subject: Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4 > > Thank you,
2018 Jan 25
2
parallel-readdir is not recognized in GlusterFS 3.12.4
By the way, on a slightly related note, I'm pretty sure either parallel-readdir or readdir-ahead has a regression in GlusterFS 3.12.x. We are running CentOS 7 with kernel-3.10.0-693.11.6.el7.x86_64. I updated my servers and clients to 3.12.4 and enabled these two options after reading about them in the 3.10.0 and 3.11.0 release notes. In the days after enabling these two options all of my
2018 Jan 30
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Thank you, Raghavendra. I guess this cosmetic fix will be in 3.12.6? I'm also looking forward to seeing stability fixes to parallel-readdir and/or readdir-ahead in 3.12.x. :) Cheers, On Mon, Jan 29, 2018 at 9:26 AM Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > ----- Original Message ----- > > From: "Pranith Kumar Karampuri" <pkarampu at
2023 May 04
1
Simple quota in GlusterFS 11
Dear list, I noticed that GlusterFS 11 introduces a new "simple quota" system. According to the pull request, the implementation is based on Amar's earlier proposal and is inspired by work they did in Kadalu. Other than that I don't see any documentation or discussion about these features. Specifically, it seems that the implementation relies on the quota features of the
2019 Jun 12
1
Proper command for replace-brick on distribute–replicate?
On 12/06/19 1:38 PM, Alan Orth wrote: > Dear Ravi, > > Thanks for the confirmation. I replaced a brick in a volume last night > and by the morning I see that Gluster has replicated data there, > though I don't have any indication of its progress. The `gluster v > heal volume info` and `gluster v heal volume info split-brain` are all > looking good so I guess that's
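For reference, the replace-brick form generally used on distribute-replicate volumes, after which self-heal repopulates the new brick; a sketch with hypothetical volume and brick names:

    gluster volume replace-brick myvol oldserver:/bricks/b1 newserver:/bricks/b1 commit force
    gluster volume heal myvol info            # watch the pending-heal count drain as data is copied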
2018 Jan 26
1
parallel-readdir is not recognized in GlusterFS 3.12.4
Dear Vlad, I'm sorry, I don't want to test this again on my system just yet! It caused too much instability for my users and I don't have enough resources for a development environment. The only other variable that changed before the crashes was the group metadata-cache[0], which I enabled the same day as the parallel-readdir and readdir-ahead options: $ gluster volume set homes
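The metadata-cache group mentioned here is applied as a bundle of options; a sketch of enabling it and inspecting what it sets, assuming the volume is called homes and a typical RPM layout:

    gluster volume set homes group metadata-cache
    cat /var/lib/glusterd/groups/metadata-cache   # the individual options the group expands to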
2018 Jan 26
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Can you please test whether parallel-readdir or readdir-ahead gives the disconnects, so we know which one to disable? parallel-readdir does some magic; there are numbers from a run in this PDF from last year: https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf -v On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <alan.orth at gmail.com> wrote: > By the way, on a slightly related note, I'm pretty
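To isolate which of the two options is responsible, they can be turned off one at a time while watching the client logs; a sketch against the homes volume named earlier in the thread:

    gluster volume set homes performance.parallel-readdir off
    # watch /var/log/glusterfs/<mountpoint>.log for disconnects, then try the other:
    gluster volume set homes performance.readdir-ahead off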
2018 Jan 23
2
BUG: After stop and start wrong port is advertised
Hello, Will we also suffer from this regression in any of the (previously) fixed 3.10 releases? We kept 3.10 and hope to stay stable :/ Regards, Jo -----Original message----- From: Atin Mukherjee <amukherj at redhat.com> Sent: Tue 23-01-2018 05:15 Subject: Re: [Gluster-users] BUG: After stop and start wrong port is advertised To: Alan Orth <alan.orth at gmail.com>; CC: Jo