similar to: 'ERROR: parsing the volfile failed' on fresh install

Displaying 20 results from an estimated 6000 matches similar to: "'ERROR: parsing the volfile failed' on fresh install"

2017 Dec 15
0
'ERROR: parsing the volfile failed' on fresh install
The glusterd.vol file is installed by default with the glusterfs package, so I'm not sure how you ended up in a state where this file was missing (as per my initial investigation of the logs). To check whether the file is really missing, run "find / -iname glusterd.vol"; if it is indeed missing, copy it from one of the peer nodes and restart
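For anyone hitting the same symptom, a minimal sketch of that check-and-copy step, assuming a healthy peer reachable as "peer1" (hostname is a placeholder) and the default path /etc/glusterfs/glusterd.vol:

    # check whether the volfile exists anywhere on this node
    find / -iname glusterd.vol 2>/dev/null

    # if it is missing, copy it from a healthy peer and restart glusterd
    scp root@peer1:/etc/glusterfs/glusterd.vol /etc/glusterfs/glusterd.vol
    systemctl restart glusterd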
2013 Jul 07
1
Getting ERROR: parsing the volfile failed (No such file or directory) when starting glusterd on Fedora 19
I don't get this. I am using a freshly installed copy of Fedora 19 and starting up glusterd for the first time. The goal is to have a replicated directory on two systems. But for right now, I can't even start up the glusterd daemon right out of the box. Trying to follow the Quick Start directions at http://gluster.org/community/documentation/index.php/QuickStart is, well, challenging.
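For reference, the Quick Start steps boil down to roughly the following; the hostnames, brick paths, and volume name below are placeholders rather than the guide's exact wording:

    # on each node
    yum install glusterfs-server     # Fedora 19 era; dnf on newer releases
    systemctl start glusterd

    # on one node, once glusterd is running on both
    gluster peer probe server2
    gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
    gluster volume start gv0

    # on a client
    mount -t glusterfs server1:/gv0 /mnt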
2017 Oct 19
1
gluster + synology
On Wed, Oct 18, 2017 at 12:45 PM, Alex Chekholko <alex at calicolabs.com> wrote: > In theory, you can run GlusterFS on a Synology box, as it is "just a Linux > box". In practice, you might be the first person to ever try it. > Not sure if this would be the first attempt. Synology seems to bundle glusterfs in some form [1]. Regards, Vijay [1]
2017 Oct 18
2
gluster + synology
Hi, Does anyone have experience using Synology NAS servers as bricks in a gluster setup? The ops team at my work prefers Synology since that is what they are already using, and because of some of the nice out-of-the-box admin features. From what I can tell Synology runs a custom Linux flavor, so it should be possible to compile gluster on it. Any first-hand experience with it? Thanks, Ben
2017 Oct 18
0
gluster + synology
In theory, you can run GlusterFS on a Synology box, as it is "just a Linux box". In practice, you might be the first person to ever try it. On Tue, Oct 17, 2017 at 8:45 PM, Ben Mabey <ben.mabey at recursionpharma.com> wrote: > Hi, > Does anyone have any experience of using Synology NAS servers as bricks in > a gluster setup? The ops team at my work prefers Synology
2017 Dec 13
1
Consultants?
Hi all, The gluster website links to a page of consultants that 404s: https://www.gluster.org/consultants/ Does anyone know of an actual list of consultants that offer gluster support? Or are there any on this mailing list? Thanks, Ben
2023 Feb 23
1
Big problems after update to 9.6
Hello, We have a cluster with two nodes, "sg" and "br", which were running GlusterFS 9.1, installed via the Ubuntu package manager. We updated the Ubuntu packages on "sg" to version 9.6, and now have big problems. The "br" node is still on version 9.1. Running "gluster volume status" on either host gives "Error : Request timed out". On
2018 May 15
2
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
Hi, I noticed that the repo for glusterfs 3.13 no longer exists at: http://mirror.centos.org/centos/7/storage/x86_64/ I knew it was not going to be long-term supported; however, the downgrade to 3.12 breaks the server node. I believe the issue is with: [2018-05-15 08:54:39.981101] E [MSGID: 101019] [xlator.c:503:xlator_init] 0-management: Initialization of volume 'management'
2018 May 15
0
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
You can still get them from https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ (I don't know how much longer they'll be there. I suggest you copy them if you think you're going to need them in the future.) On 05/15/2018 04:58 AM, Davide Obbi wrote: > hi, > > i noticed that this repo for glusterfs 3.13 does not exists anymore at: > >
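If you do want a local copy before the directory disappears, something along these lines mirrors the RPMs (the wget options and destination path are just one reasonable choice):

    wget -r -np -nH --cut-dirs=4 -R "index.html*" -P /srv/mirror/gluster-3.13 \
        https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/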
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
Thanks Kaleb, any chance I can get the node working after the downgrade? Thanks. On Tue, May 15, 2018 at 2:02 PM, Kaleb S. KEITHLEY <kkeithle at redhat.com> wrote: > > You can still get them from > https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ > > (I don't know how much longer they'll be there. I suggest you copy them > if you think
2011 Jul 08
1
Possible to bind to multiple addresses?
I am trying to run GlusterFS on only my internal interfaces. I have set up two bricks and have a replicated volume that is started. Everything works fine when I run with no transport.socket.bind-address defined in the /etc/glusterfs/glusterd.vol file, but when I add it I get: Transport endpoint is not connected My configuration looks like this: volume management type mgmt/glusterd
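For context, the change under discussion typically looks something like the excerpt below; the address is a placeholder for an internal interface, and the other options simply follow the stock glusterd.vol rather than being required for bind-address itself:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.socket.bind-address 10.0.0.10
    end-volume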
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
On 05/15/2018 08:08 AM, Davide Obbi wrote: > Thanks Kaleb, > > any chance i can make the node working after the downgrade? > thanks Without knowing what doesn't work, I'll go out on a limb and guess that it's an op-version problem. Shut down your 3.13 nodes, change their op-version to one of the valid 3.12 op-versions (e.g. 31203) and restart. Then the 3.12 nodes should
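For anyone searching later, one way to apply that advice on a node whose glusterd will no longer start is to edit the stored op-version directly; the path and key below are where glusterd normally keeps it, and 31203 is just the example value from the reply:

    systemctl stop glusterd
    grep operating-version /var/lib/glusterd/glusterd.info
    sed -i 's/^operating-version=.*/operating-version=31203/' /var/lib/glusterd/glusterd.info
    systemctl start glusterd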
2017 Aug 06
0
State: Peer Rejected (Connected)
On 2017-08-06 15:59, mabi wrote: > Hi, > > I have a 3 nodes replica (including arbiter) volume with GlusterFS > 3.8.11 and this night one of my nodes (node1) had an out of memory for > some unknown reason and as such the Linux OOM killer has killed the > glusterd and glusterfs process. I restarted the glusterd process but > now that node is in "Peer Rejected"
2023 Feb 24
1
Big problems after update to 9.6
Hi David, It seems like a network issue to me, as it's unable to connect to the other node and is getting a timeout. A few things you can check: * Check the /etc/hosts file on both servers and make sure it has the correct IP of the other node. * Are you binding gluster to any specific IP that changed after your update? * Check whether you can access port 24007 from the other host. If
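A quick way to run those checks from each node (the peer name "br" is taken from the thread; adjust to your hostnames):

    # name resolution: make sure each node resolves the other to the right address
    getent hosts br
    cat /etc/hosts

    # port reachability: 24007 is glusterd's management port
    nc -zv br 24007

    # see which address glusterd is actually listening on
    ss -tlnp | grep 24007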
2017 Aug 06
1
State: Peer Rejected (Connected)
Hi Ji-Hyeon, Thanks to your help I could find the problematic file. It is the quota file of my volume: it has a different checksum on node1, whereas node2 and arbiternode have the same checksum. This is expected, as I had issues with my quota file and had to fix it manually with a script (more details on this mailing list in a previous post), and I only did that on node1. So what I now
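For anyone in a similar situation, a sketch of how the mismatch can be compared across nodes; the volume name is a placeholder and the path assumes glusterd's default working directory:

    # run on each node (node1, node2, arbiternode) and compare the output
    md5sum /var/lib/glusterd/vols/myvol/quota.conf
    cat /var/lib/glusterd/vols/myvol/quota.cksum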
2017 Oct 17
2
Gluster processes remaining after stopping glusterd
Hi, I noticed that when I stop my gluster server via the systemctl stop glusterd command, one glusterfs process is still up. What is the correct way to stop all gluster processes on my host? This is what we see after running the command: [root at xxxxxx ~]# ps -ef | grep -i glu root 1825 1
2017 Oct 18
0
Gluster processes remaining after stopping glusterd
On Tue, Oct 17, 2017 at 3:28 PM, ismael mondiu <mondiu at hotmail.com> wrote: > Hi, > > I noticed that when I stop my gluster server via the systemctl stop glusterd > command, one glusterfs process is still up. > > What is the correct way to stop all gluster processes on my host? > Stopping the glusterd service doesn't bring down any services other than the glusterd process.
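To see exactly which gluster processes are left behind after stopping glusterd (brick daemons, self-heal daemon, client mounts, etc.), something like the following works:

    pgrep -af 'gluster(d|fs|fsd)'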
2017 May 09
1
Empty info file preventing glusterd from starting
Hi Atin/Team, We are using gluster-3.7.6 with a two-brick setup, and on system restart I have seen that the glusterd daemon fails to start. While analyzing the etc-glusterfs.......log file I found the logs below: [2017-05-06 03:33:39.798087] I [MSGID: 100030] [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version
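One quick way to check whether this is the known empty-info-file problem (the path below is glusterd's default working directory; adjust if yours differs):

    # any zero-byte files under glusterd's store are suspect
    find /var/lib/glusterd -type f -size 0 -ls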
2013 Oct 07
1
glusterd service fails to start on one peer
I'm hoping that someone here can point me in the right direction to help me solve a problem I am having. I've got 3 gluster peers and for some reason glusterd will not start on one of them. All are running glusterfs version 3.4.0-8.el6 on CentOS 6.4 (2.6.32-358.el6.x86_64). In /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I see this error repeated 36 times (alternating between brick-0
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone, I am using GlusterFS 9.4, and whenever we use the systemctl command to stop the Gluster server, it leaves many Gluster processes running. So, I just want to check how to shut down the Gluster server in a graceful manner. Is there any specific sequence or trick I need to follow? Currently, I am using the following command: [root at master2 ~]# systemctl stop glusterd.service
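One commonly suggested sequence is sketched below; the helper script path is where the glusterfs packages usually install it, so treat it as an assumption and check your system first:

    systemctl stop glusterd
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
    # or, more bluntly:
    pkill glusterfs; pkill glusterfsd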