similar to: [ovirt-users] Gluster issue with /var/lib/glusterd/peers/<ip> file

Displaying 20 results from an estimated 2000 matches similar to: "[ovirt-users] Gluster issue with /var/lib/glusterd/peers/<ip> file"

2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > And what does glusterd log indicate for these failures? > See here in gzip format https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing It seems that on each host the peer files have been updated with a new entry "hostname2": [root at ovirt01 ~]# cat
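For context, each file under /var/lib/glusterd/peers/ records one peer's identity, and the "hostname2" line is an additional address entry for that peer. A hedged sketch of what such a file typically contains (the UUID and addresses below are made-up placeholders, not the poster's values):

    # /var/lib/glusterd/peers/<peer-uuid>   -- illustrative values only
    uuid=5b0b3e1c-0000-0000-0000-000000000000
    state=3
    hostname1=10.10.10.11
    hostname2=ovirt01.example.com    # the extra entry the poster observed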
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
OK, so the log just hints to the following: [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Reset Brick on local node [2017-07-05 15:04:07.178214] E [MSGID: 106123] [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases] 0-management: Commit Op Failed While going through the code,
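For reference, the reset-brick workflow whose commit phase is failing here follows this general CLI pattern, and it is gated on the cluster op-version (the volume and brick names below are placeholders; checking the op-version this way assumes a release new enough to support "volume get all"):

    # check the cluster's current op-version
    gluster volume get all cluster.op-version

    # reset-brick workflow (placeholder names)
    gluster volume reset-brick myvol node1:/bricks/b1 start
    gluster volume reset-brick myvol node1:/bricks/b1 node1:/bricks/b1 commit force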
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
And what does glusterd log indicate for these failures? On Wed, Jul 5, 2017 at 8:43 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > > > On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote: > >> >> >> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi < >> gianluca.cecchi at gmail.com> wrote: >> >>>
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin, I've gotten around to this and was able to get the upgrade done using 3.7.0 before moving to 3.11. For some reason 3.7.9 wasn't working well. On 3.11, though, I notice that gluster/nfs is really made optional and nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha on new clusters but would like to have glusterfs-gnfs on existing clusters so a seamless upgrade
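For what it's worth, where the built-in gluster NFS server (gnfs) is still wanted on releases that make it optional, the usual knob is the per-volume nfs.disable option; a minimal sketch with a placeholder volume name (the glusterfs-gnfs package must be installed where it is split out separately):

    # re-enable the legacy gluster NFS server on a volume
    gluster volume set myvol nfs.disable off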
2014 Apr 28
2
volume start causes glusterd to core dump in 3.5.0
I just built a pair of AWS Red Hat 6.5 instances to create a gluster replicated pair file system. I can install everything, peer probe, and create the volume, but as soon as I try to start the volume, glusterd dumps core. The tail of the log after the crash: +------------------------------------------------------------------------------+ [2014-04-28 21:49:18.102981] I
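The reported crash path boils down to a minimal sequence like the following (hostnames and brick paths are placeholders):

    gluster peer probe node2
    gluster volume create repvol replica 2 node1:/bricks/b1 node2:/bricks/b1
    gluster volume start repvol    # this step triggered the glusterd core dump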
2018 Feb 26
0
rpc/glusterd-locks error
Good morning. We have a 6 node cluster. 3 nodes are participating in a replica 3 volume. Naming convention: xx01 - 3 nodes participating in ovirt_vol xx02 - 3 nodes NOT participating in ovirt_vol Last week, we restarted glusterd on each node in the cluster to update (one at a time). The three xx01 nodes all show the following in glusterd.log: [2018-02-26 14:31:47.330670] E
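A rolling glusterd restart of the kind described is normally done one node at a time, verifying peer state before moving on; a minimal sketch:

    systemctl restart glusterd
    gluster peer status    # confirm all peers show 'Connected' before the next node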
2012 Dec 03
1
"gluster peer status" messed up
I have three machines, all Ubuntu 12.04 running gluster 3.3.1. storage1 192.168.6.70 on 10G, 192.168.5.70 on 1G storage2 192.168.6.71 on 10G, 192.168.5.71 on 1G storage3 192.168.6.72 on 10G, 192.168.5.72 on 1G Each machine has two NICs, but on each host, /etc/hosts lists the 10G interface on all machines. storage1 and storage3 were taken away for hardware changes, which included
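An /etc/hosts layout matching the poster's description would look roughly like this on every host, pinning each name to the 10G address:

    192.168.6.70  storage1
    192.168.6.71  storage2
    192.168.6.72  storage3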
2017 Aug 22
0
Glusterd process hangs on reboot
As an addition, perf top shows 80% libc-2.12.so __strcmp_sse42 during glusterd's 100% CPU usage. Hope this helps... On Tue, Aug 22, 2017 at 2:41 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi there, > > I have a strange problem. > Gluster version is 3.10.5, I am testing new servers. Gluster > configuration is 16+4 EC, I have three volumes, each has 1600 bricks. > I
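For anyone reproducing this, CPU hotspots in glusterd can be sampled the same way the poster did:

    perf top -p $(pidof glusterd)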
2020 Oct 02
0
Centos8: Glusterd do not start correctly when I startup or reboot all server together
Modifying the systemd glusterd.service unit did not resolve my problem. The solution is to mount the glusterfs volume with this line in /etc/fstab: virt2:/gfsvol2 /virt-gfs glusterfs defaults,_netdev,noauto,x-systemd.automount,x-systemd.device-timeout=20,x-systemd.requires=glusterd.service 0 0 Then run systemctl daemon-reload and run this to mount the volume: systemctl restart virt\\x2dgfs.mount
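Laid out separately, the poster's fstab entry and commands look like this (server and volume names are the poster's own; the double backslash escapes the systemd-encoded unit name for the shell):

    # /etc/fstab
    virt2:/gfsvol2  /virt-gfs  glusterfs  defaults,_netdev,noauto,x-systemd.automount,x-systemd.device-timeout=20,x-systemd.requires=glusterd.service  0 0

    systemctl daemon-reload
    systemctl restart virt\\x2dgfs.mount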
2014 Mar 04
1
glusterd service fails to start from AWS AMI
Hello all. I have a working replica 2 cluster (4 nodes) up and running happily over Amazon EC2. My end goal is to create AMIs of each machine and then quickly reproduce the same, but new, cluster from those AMIs. Essentially, I'd like a cluster "template". -Assigned original instances' Elastic IPs to new machines to reduce resolution issues. -Passwordless SSH works on initial
2017 Aug 22
2
Glusterd process hangs on reboot
Hi there, I have a strange problem. Gluster version is 3.10.5, I am testing new servers. Gluster configuration is 16+4 EC, I have three volumes, each has 1600 bricks. I can successfully create the cluster and volumes without any problems. I write data to the cluster from 100 clients for 12 hours, again no problem. But when I try to reboot a node, the glusterd process hangs at 100% CPU usage and seems to
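A 16+4 dispersed (erasure-coded) volume like the one described is created along these lines (server and brick names are placeholders; the example builds a single 20-brick disperse set, whereas the poster's volumes each had 1600 bricks, i.e. many such sets):

    gluster volume create ecvol disperse-data 16 redundancy 4 \
        server{1..20}:/bricks/b1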
2013 Oct 07
1
glusterd service fails to start on one peer
I'm hoping that someone here can point me in the right direction to help me solve a problem I am having. I've got 3 gluster peers and for some reason glusterd will not start on one of them. All are running glusterfs version 3.4.0-8.el6 on CentOS 6.4 (2.6.32-358.el6.x86_64). In /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I see this error repeated 36 times (alternating between brick-0
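When glusterd refuses to start like this, running it in the foreground with debug logging usually surfaces the failing step:

    glusterd --debug    # runs in the foreground, logging at DEBUG level to stdout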
2013 Jan 15
2
1024 char limit for auth.allow and automatically re-reading auth.allow without having to restart glusterd?
Hi, Does anyone know if the 1024 char limit for auth.allow still exists in the latest production version (it seems to be there in 3.2.5)? Also, does anyone know if the new versions pick up changes to auth.allow without having to restart glusterd? Is there any way to restart glusterd without killing it and restarting the process, e.g. is kill -1 (HUP) possible with it (also with the version I'm running)?
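For what it's worth, on current releases auth.allow is set per volume through the CLI and takes effect without restarting glusterd (the volume name and addresses below are placeholders):

    gluster volume set myvol auth.allow "192.168.5.*,10.0.0.42"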
2017 May 09
1
Empty info file preventing glusterd from starting
Hi Atin/Team, We are using gluster-3.7.6 with a setup of two bricks, and on system restart I have seen that the glusterd daemon fails to start. While analyzing the logs from the etc-glusterfs.......log file I found the following: [2017-05-06 03:33:39.798087] I [MSGID: 100030] [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version
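A common recovery for a truncated volume info file is to restore the volume's configuration directory from a healthy peer, or to resync it once glusterd is up; a hedged sketch (volume name and peer hostname are placeholders, and the damaged directory should be backed up first):

    # on the broken node, copy the volume config from a healthy peer
    rsync -a goodpeer:/var/lib/glusterd/vols/myvol/ /var/lib/glusterd/vols/myvol/
    systemctl start glusterd

    # alternatively, once glusterd is running, pull the config from a peer
    gluster volume sync goodpeer all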
2017 Sep 20
2
hostname
Hi, how do I change the hostname of gluster servers? If I modify hostname1 in /var/lib/glusterd/peers/<uuid>, the change is not saved... gluster pool list returns the server IP and not the new hostname... Thank you
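There is no single supported rename command on older releases; the manual approach people use is to stop glusterd everywhere and edit the renamed peer's file on the *other* nodes, then restart. A hedged sketch only (the uuid and name are placeholders, and this should be tested outside production first):

    # on every node:
    systemctl stop glusterd
    # on each OTHER node, edit /var/lib/glusterd/peers/<uuid-of-renamed-node>
    # and set:  hostname1=newname.example.com
    # then on every node:
    systemctl start glusterd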
2017 Aug 02
2
glusterd daemon - restart
Can the glusterd daemon be restarted on all storage nodes without causing any disruption to data being served or the cluster in general? I am running gluster 3.2 using distributed replica 2 volumes with fuse clients. Regards, Mark
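Generally yes: glusterd is the management daemon, while client I/O is served by the per-brick glusterfsd processes, which keep running across a glusterd restart. A cautious rolling approach (the systemd form assumes a newer distribution than a gluster 3.2-era system may have):

    # one node at a time; brick (glusterfsd) processes are untouched
    service glusterd restart    # or: systemctl restart glusterd
    gluster peer status         # verify peers are Connected before the next node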
2017 Sep 14
1
Glusterd process hangs on reboot
Hi Serkan, I was wondering if you resolved your issue with the high CPU usage and hang after starting gluster? I'm setting up a 3 server (replica 3, arbiter 1), 300 volume, Gluster 3.12 cluster on CentOS 7 and am having what looks to be exactly the same issue as you. With no volumes created CPU usage / load is normal, but after creating all the volumes even with no data CPU and RAM usage
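The described layout is created per volume roughly like this (server and brick names are placeholders; the poster had 300 such volumes):

    gluster volume create vol001 replica 3 arbiter 1 \
        s1:/bricks/vol001 s2:/bricks/vol001 s3:/bricks/vol001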
2017 Oct 18
0
Gluster processes remaining after stopping glusterd
On Tue, Oct 17, 2017 at 3:28 PM, ismael mondiu <mondiu at hotmail.com> wrote: > Hi, > > I noticed that when I stop my gluster server via the systemctl stop glusterd > command, one glusterfs process is still up. > > What is the correct way to stop all gluster processes on my host? > Stopping the glusterd service doesn't bring down any services other than the glusterd process.
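To stop everything, bricks included, many glusterfs packages ship a helper script; if it is absent, the remaining processes can be stopped explicitly (a hedged sketch, to be used only when no volume should stay served from this host):

    systemctl stop glusterd
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh   # if packaged
    # or, explicitly:
    pkill glusterfsd; pkill glusterfs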
2020 Sep 28
1
Centos8: Glusterd do not start correctly when I startup or reboot all server together
I have installed and configured glusterfs in replica mode on two CentOS 8 servers in this manner: dnf install centos-release-gluster -y dnf install glusterfs-server glusterfs glusterfs-fuse -y systemctl enable --now glusterd gluster peer probe virt1 gluster peer status sh creavolume.sh gfsvol1 301G /gfsvol1 xfs # NOTE: this is my shell script to create the fs on LVM mkdir
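Continuing past where the snippet cuts off, a two-node replica volume would then be created and started along these lines (the volume name follows the poster's fstab entry in the related October post; the exact brick paths are assumptions):

    gluster volume create gfsvol2 replica 2 virt1:/gfsvol1/brick virt2:/gfsvol1/brick
    gluster volume start gfsvol2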
2013 Aug 30
1
cli & glusterd sm develop guide
Hi, I want to develop a CLI command to create snapshots, but the glusterd op state machine & hooks look complex. Are there any development guides or docs about the cli & glusterd backend processing? Thanks. --terrs