Displaying 20 results from an estimated 823 matches for "glusterds".
2011 Apr 20
1
add brick unsuccessful
Hi again,
I'm having trouble testing the add-brick feature. I'm using a
replica 2 setup with currently 2 nodes and am
trying to add 2 more. The command blocks for a bit until I get an "Add
Brick unsuccessful" message.
Looking at etc-glusterfs-glusterd.vol.log below, I can't seem to find
anything:
[2011-04-20 12:55:06.944593] I
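For a replica 2 volume, bricks must be added in multiples of the replica count, so the expected form of the command is roughly the following (hostnames and brick paths here are placeholders, not from the thread):

  # add one new replica pair to an existing replica 2 volume
  gluster volume add-brick myvol node3:/export/brick1 node4:/export/brick1

Brick paths that don't exist or hostnames that don't resolve on all peers are common causes of the "Add Brick unsuccessful" message.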
2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi,
I have a problem joining four Gluster 3.10 nodes to an existing
Gluster 3.8 cluster. My understanding is that this should work and not be
too much of a problem.
Peer probe is successful but the node is rejected:
gluster> peer detach elkpinfglt07
peer detach: success
gluster> peer probe elkpinfglt07
peer probe: success.
gluster> peer status
Number of Peers: 6
Hostname: elkpinfglt02
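When a probed peer ends up in the rejected state, the recovery sequence commonly suggested in the Gluster documentation is to reset the rejected node's local state (a sketch; /var/lib/glusterd is the default path):

  # on the rejected node only
  systemctl stop glusterd
  # keep glusterd.info (the node's UUID), remove the rest of the state
  find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
  systemctl start glusterd
  gluster peer probe elkpinfglt02   # re-probe a healthy peer
  systemctl restart glusterd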
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello,
I installed glusterfs 3.11.3 on 3 Ubuntu 16.04 nodes. All machines have the same /etc/hosts.
node1 hostname
pri.ostechnix.lan
node2 hostname
sec.ostechnix.lan
node3 hostname
third.ostechnix.lan
51.15.77.14 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
volume create command is
root at
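The snippet cuts off before the command itself; a plausible reconstruction for this three-node layout (the volume name and brick paths are assumptions) would be:

  gluster volume create vol1 replica 3 \
      pri.ostechnix.lan:/gluster/brick1 \
      sec.ostechnix.lan:/gluster/brick1 \
      third.ostechnix.lan:/gluster/brick1

The "Peer in Cluster state" error discussed in this thread typically indicates that one of these hostnames does not resolve consistently on every node.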
2017 Jun 20
2
trash can feature, crashed???
All,
I currently have 2 bricks running Gluster 3.10.1. This is a CentOS
installation. On Friday last week, I enabled the trashcan feature on one of
my volumes:
gluster volume set date01 features.trash on
I also limited the max file size to 500MB:
gluster volume set data01 features.trash-max-filesize 500MB
3 hours after I enabled this, this specific gluster volume went down:
[2017-06-16
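A quick way to confirm what actually got set (note that the volume name differs between the two commands above) is to read the options back with gluster volume get, available in recent releases:

  gluster volume get data01 features.trash
  gluster volume get data01 features.trash-max-filesize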
2018 Feb 07
2
Ip based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote:
> After seeing the command history, I could see that you have 3 nodes, and
> firstly you are peer probing 51.15.90.60 and 163.172.151.120 from
> 51.15.77.14.
> So here itself you have a 3 node cluster; after all this you are going
> on node 2 and again peer probing 51.15.77.14.
> Ideally it should work with the above steps, but due to some
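Reconstructed as commands (addresses taken from the thread), the probe sequence being described is:

  # from 51.15.77.14 (node 1)
  gluster peer probe 51.15.90.60
  gluster peer probe 163.172.151.120
  # then from node 2, probe back so node 1 is known by its address too
  gluster peer probe 51.15.77.14

Probing the first node back from another peer is the documented way to have it tracked by its address rather than as localhost.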
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
I'm guessing there's something wrong w.r.t address resolution on node 1.
From the logs it's quite clear to me that node 1 is unable to resolve the
address configured in /etc/hosts, whereas the other nodes do. Could you
paste the gluster peer status output from all the nodes?
Also can you please check if you're able to ping "pri.ostechnix.lan" from
node1 only? Does
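A few quick checks for this kind of resolution problem, run on node 1:

  ping -c 3 pri.ostechnix.lan       # basic reachability
  getent hosts pri.ostechnix.lan    # what the resolver actually returns
  gluster peer status               # compare the output across all nodes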
2018 Mar 21
2
Brick process not starting after reinstall
Hi all,
our systems have suffered a host failure in a replica three setup.
The host needed a complete reinstall. I followed the RH guide to
'replace a host with the same hostname'
(https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts).
The machine has the same OS (CentOS 7). The new machine got a minor
version number newer
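When bricks stay offline after a host replacement, a usual first step is to check what glusterd thinks is running and, if the brick processes simply were not spawned, force-start the volume (the volume name below is a placeholder):

  gluster volume status
  gluster volume start myvol force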
2017 Dec 01
0
Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a problem joining four Gluster 3.10 nodes to an existing
> Gluster 3.8 cluster. My understanding is that this should work and not be
> too much of a problem.
>
> Peer probe is successful but the node is rejected:
>
> gluster> peer detach elkpinfglt07
> peer
2011 Dec 14
1
glusterfs crash when the one of replicate node restart
Hi, we have used glusterfs for two years. After upgrading to 3.2.5, we discovered
that when one of the replicate nodes reboots and starts the glusterd daemon,
gluster crashes because the other
replicate node's CPU usage reaches 100%.
Our gluster info:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Options Reconfigured:
performance.cache-size: 3GB
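For reference, options like the one shown in this volume info dump are applied with volume set, e.g.:

  gluster volume set myvol performance.cache-size 3GB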
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote:
> All,
>
> I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last
> week, I enabled the trashcan feature on one of my volumes:
> gluster volume set date01 features.trash on
I think you misspelled the volume name. Is it data01 or date01?
> I also limited the max file size to 500MB:
2018 Mar 21
0
Brick process not starting after reinstall
Could you share the following information:
1. gluster --version
2. output of gluster volume status
3. glusterd log and all brick log files from the node where bricks didn't
come up.
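On a stock install those logs live under /var/log/glusterfs/, so gathering everything requested looks roughly like this (paths are the defaults):

  gluster --version
  gluster volume status
  tar czf gluster-logs.tar.gz /var/log/glusterfs/glusterd.log /var/log/glusterfs/bricks/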
On Wed, Mar 21, 2018 at 12:35 PM, Richard Neuboeck <hawk at tbi.univie.ac.at>
wrote:
> Hi all,
>
> our systems have suffered a host failure in a replica three setup.
> The host needed a
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
Did you do gluster peer probe? Check out the documentation:
http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/
On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
> Hello,
>
> I installed glusterfs 3.11.3 on 3 Ubuntu 16.04 nodes. All
> machines have the same /etc/hosts.
>
> node1 hostname
> pri.ostechnix.lan
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi list,
recently I've noticed some strange behaviour in my gluster storage: sometimes
while executing a simple command like "gluster volume status
vm-images-repo", the response I get is "Another transaction is in progress
for vm-images-repo. Please try again after sometime.". This situation
does not resolve by simply waiting; I have to restart glusterd on
the node that
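The glusterd log usually names the UUID holding the stale volume lock, which can be matched against the peer list to find the node to restart (log path is the default):

  grep -i 'lock' /var/log/glusterfs/glusterd.log | tail
  gluster peer status              # maps UUIDs to hostnames
  # then, on the node holding the stale lock:
  systemctl restart glusterd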
2018 Mar 20
0
brick processes not starting
Hi all,
our systems have suffered a node failure in a replica three setup.
The node needed a complete reinstall. I followed the RH guide to
replace a host with the same hostname
(https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts).
The machine has the same OS (CentOS 7). The new machine got a minor
version number newer gluster
2017 Aug 17
3
Glusterd not working with systemd in redhat 7
Hi Team,
I noticed that glusterd never starts when I reboot my Red Hat 7.1 server.
The service is enabled but doesn't work.
I tested with gluster 3.10.4 & gluster 3.10.5 and the problem still exists.
When I start the service manually, it works.
I've also tested on a Red Hat 6.6 server with gluster 3.10.4 and it works fine.
The problem seems to be related to Red Hat 7.1.
This is
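To see why the unit failed at boot, the systemd journal is the first place to look:

  systemctl status glusterd -l
  journalctl -u glusterd -b    # messages from the current boot only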
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi,
we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume;
Proxmox is attached and VMs are created, but after some time, and I think
after much I/O has gone on in a VM, the data inside the virtual machine
gets corrupted. When I copy files from or to our glusterfs
directly, everything is OK. I've
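For VM-image workloads, Gluster ships a predefined option group that applies the settings generally recommended for qcow2 images in one step (the volume name is a placeholder):

  gluster volume set myvol group virt

Running VM images without these options is a commonly reported cause of image corruption under heavy I/O.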
2018 Mar 06
4
Fixing a rejected peer
Hello,
So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
It actually began as the same problem with a different peer. I noticed it on (call it) gluster-2 when I couldn't create a new volume. I compared /var/lib/glusterd between them and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
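One way to compare volume definitions across peers is to check glusterd's on-disk state directly; each volume directory keeps an info file and a checksum of it (the volume name is a placeholder):

  md5sum /var/lib/glusterd/vols/myvol/info   # run on each peer and compare
  cat /var/lib/glusterd/vols/myvol/cksum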
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes.
thanks,
Paolo
On 20/07/2017 at 11:38, Atin Mukherjee wrote:
> Please share the cmd_history.log file from all the storage nodes.
>
> On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara
> <paolo.margara at polito.it <mailto:paolo.margara at polito.it>> wrote:
>
> Hi list,
>
> recently I've
2018 Feb 06
1
strange hostname issue on volume create command with famous Peer in Cluster state error message
I changed /etc/hosts
127.0.0.1 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
on every node, mapping each node's own hostname to 127.0.0.1.
then
root at pri:~# apt-get purge glusterfs-server
root at pri:~# rm -rf /var/lib/glusterd/
root at pri:~# rm -rf /var/log/glusterfs/
root at pri:~# apt-get install glusterfs-server
root at pri:~#
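For glusterd, each node's own FQDN generally needs to resolve to its reachable address rather than the loopback, since peers exchange these names; the usual recommendation is the original layout on every node:

  51.15.77.14      pri.ostechnix.lan pri
  51.15.90.60      sec.ostechnix.lan sec
  163.172.151.120  third.ostechnix.lan third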
2017 Aug 18
0
Glusterd not working with systemd in redhat 7
You're hitting a race here: by the time glusterd tries to resolve the
address of one of the remote bricks of a particular volume, the network
interface is not yet up. We have fixed this issue in mainline and the
3.12 branch through the following commit:
commit 1477fa442a733d7b1a5ea74884cac8f29fbe7e6a
Author: Gaurav Yadav <gyadav at redhat.com>
Date: Tue Jul 18 16:23:18 2017 +0530
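For versions without that fix, a common systemd-level workaround (a sketch, independent of the commit referenced above) is a drop-in that delays glusterd until the network is actually online:

  # /etc/systemd/system/glusterd.service.d/wait-for-network.conf
  [Unit]
  After=network-online.target
  Wants=network-online.target

  # then reload systemd:
  systemctl daemon-reload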