2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi,
I have a problem joining four Gluster 3.10 nodes to an existing
cluster of Gluster 3.8 nodes. My understanding is that this should work
and not be too much of a problem.
Peer probe is successful but the node is rejected:
gluster> peer detach elkpinfglt07
peer detach: success
gluster> peer probe elkpinfglt07
peer probe: success.
gluster> peer status
Number of Peers: 6
Hostname: elkpinfglt02
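(For anyone landing here with the same "Peer Rejected" state: the sequence below is a sketch of the commonly documented recovery procedure, not something from this thread. It assumes a systemd-based host and the default /var/lib/glusterd state directory.)
# On the rejected node: stop glusterd and clear its state, keeping the UUID
systemctl stop glusterd
cd /var/lib/glusterd
find . -mindepth 1 ! -name glusterd.info -delete
systemctl start glusterd
# From a node already in the cluster, probe the rejected node again,
# then restart glusterd on the rejected node once more and verify:
gluster peer probe elkpinfglt07
gluster peer status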
2017 Dec 01
0
Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com>
wrote:
> Hi,
>
> I have a problem joining four Gluster 3.10 nodes to an existing
> cluster of Gluster 3.8 nodes. My understanding is that this should work
> and not be too much of a problem.
>
> Peer probe is successful but the node is rejected:
>
> gluster> peer detach elkpinfglt07
> peer
2011 Jan 09
1
gluster peer probe
Hello everyone,
So this is my first email here. I recently downloaded glusterfs-3.1 and
tried to install it on my four servers. I did not have any problems with the
installation, but the configuration is a little unclear to me. I would
like to ask whether you ran into the same problems I encountered and,
if so, how you resolved them.
My configuration: 5 servers
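(Sketch of the basic pool setup, hedged: run from any one of the servers, with hypothetical hostnames server2 through server5 standing in for the real ones.)
# Each probe adds one server to the trusted storage pool
gluster peer probe server2
gluster peer probe server3
gluster peer probe server4
gluster peer probe server5
gluster peer status    # should now list 4 peers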
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello,
I installed glusterfs 3.11.3 on 3 Ubuntu 16.04 nodes. All machines have the same /etc/hosts.
node1 hostname
pri.ostechnix.lan
node2 hostname
sec.ostechnix.lan
node3 hostname
third.ostechnix.lan
51.15.77.14 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
volume create command is
root at
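(The create command itself is cut off in this excerpt. For these hosts, a typical 3-way replicated create would look like the sketch below; the volume name and brick paths are hypothetical.)
gluster volume create vol1 replica 3 \
  pri.ostechnix.lan:/data/brick1 \
  sec.ostechnix.lan:/data/brick1 \
  third.ostechnix.lan:/data/brick1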
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
I'm guessing there's something wrong w.r.t. address resolution on node 1.
From the logs it's quite clear to me that node 1 is unable to resolve the
address configured in /etc/hosts, whereas the other nodes do. Could you
paste the gluster peer status output from all the nodes?
Also can you please check if you're able to ping "pri.ostechnix.lan" from
node1 only? Does
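(A quick, generic way to compare what each node actually resolves, not from this thread:)
# Run on every node and compare the answers
getent hosts pri.ostechnix.lan   # the address the resolver actually returns
ping -c 3 pri.ostechnix.lan
gluster peer status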
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
Did you do gluster peer probe? Check out the documentation:
http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/
On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote:
> Hello,
>
> I installed glusterfs 3.11.3 on 3 Ubuntu 16.04 nodes. All
> machines have the same /etc/hosts.
>
> node1 hostname
> pri.ostechnix.lan
2018 Feb 06
1
strange hostname issue on volume create command with famous Peer in Cluster state error message
I changed /etc/hosts on every node, mapping each node's own hostname to 127.0.0.1. On node1 it now reads:
127.0.0.1 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
then
root at pri:~# apt-get purge glusterfs-server
root at pri:~# rm -rf /var/lib/glusterd/
root at pri:~# rm -rf /var/log/glusterfs/
root at pri:~# apt-get install glusterfs-server
root at pri:~#
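(Note, hedged: pointing a node's own FQDN at 127.0.0.1 is a common cause of exactly this class of peer problem, since other peers must be able to reach each name on a routable address. A layout that avoids it, using the addresses quoted earlier in the thread:)
# /etc/hosts on every node: keep loopback and the FQDNs separate
127.0.0.1        localhost
51.15.77.14      pri.ostechnix.lan    pri
51.15.90.60      sec.ostechnix.lan    sec
163.172.151.120  third.ostechnix.lan  third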
2011 Jul 08
1
Possible to bind to multiple addresses?
I am trying to run GlusterFS on only my internal interfaces. I have
set up two bricks and have a replicated volume that is started.
Everything works fine when I run with no transport.socket.bind-address
defined in the /etc/glusterfs/glusterd.vol file, but when I add it I get:
Transport endpoint is not connected
My configuration looks like this:
volume management
type mgmt/glusterd
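(The paste is cut off here. A full glusterd.vol sketch with the option in place, assuming a hypothetical internal address of 10.0.0.1:)
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.bind-address 10.0.0.1
end-volume
(One likely culprit for the "Transport endpoint is not connected" error: the gluster CLI talks to localhost by default, so once glusterd binds only the internal address the CLI may need to be pointed at it, e.g. gluster --remote-host=10.0.0.1.)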
2011 Dec 14
1
glusterfs crash when the one of replicate node restart
Hi, we have used glusterfs for two years. After upgrading to 3.2.5, we
discovered that when one of the replicate nodes reboots and starts up the
glusterd daemon, gluster crashes because the other
replicate node's CPU usage reaches 100%.
Our gluster info:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Options Reconfigured:
performance.cache-size: 3GB
2011 Sep 06
1
Inconsistent md5sum of replicated file
I was wondering if anyone would be able to shed some light on how a file
could end up with inconsistent md5sums on Gluster backend storage.
Our configuration is running on Gluster v3.1.5 in a distribute-replicate
setup consisting of 8 bricks.
Our OS is Red Hat 5.6 x86_64. Backend storage is an ext3 RAID 5.
The 8 bricks are in RR DNS and are mounted for reading/writing via NFS
automounts.
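(A generic way to find which replica diverged, sketched with hypothetical brick and mount paths:)
# On each replica server, hash the file directly on the brick
md5sum /export/brick1/path/to/file
# Then hash it through a client mount; accessing the file may trigger self-heal
md5sum /mnt/gluster/path/to/file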
2011 Jan 12
1
Setting up 3.1
[Repost - last time this didn't seem to work]
I've been running gluster for a couple of years, so I'm quite used to 3.0.x and earlier. I'm looking to upgrade to 3.1.1 for more stability (I'm getting frequent 'file has vanished' errors when rsyncing from 3.0.6) on a bog-standard 2-node dist/rep config. So far it's not going well. I'm running on Ubuntu Lucid x64
2017 Jun 14
2
gluster peer probe failing
Hi,
I have a gluster (version 3.10.2) server running on a 3 node (centos7) cluster.
Firewalld and SELinux are disabled, and I see I can telnet from each node to the other on port 24007.
When I try to create the first peering by running on node1 the command:
gluster peer probe <node2 ip address>
I get the error:
"Connection failed. Please check if gluster daemon is operational."
2017 Jun 20
2
gluster peer probe failing
Hi,
I tried setting the corresponding ports on my host, but I didn't see
the issue on my machine locally.
However, from the logs you have sent it is pretty clear the issue is
related to ports only.
I will try to reproduce it on some other machine and will update you as
soon as possible.
Thanks
Gaurav
On Sun, Jun 18, 2017 at 12:37 PM, Guy Cukierman <guyc at elminda.com> wrote:
>
2017 Jun 15
0
gluster peer probe failing
https://review.gluster.org/#/c/17494/ addresses this, and the next update of 3.10
should include the fix.
If sysctl net.ipv4.ip_local_reserved_ports has any value > short int
range then this would be a problem with the current version.
Would you be able to reset the reserved ports temporarily to get this going?
On Wed, Jun 14, 2017 at 8:32 PM, Guy Cukierman <guyc at elminda.com> wrote:
>
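(Temporarily clearing the reservation, as suggested above, could look like the sketch below; the change does not persist across reboots unless written to sysctl.conf.)
# Record the current value, then clear it while probing
sysctl net.ipv4.ip_local_reserved_ports
sysctl -w net.ipv4.ip_local_reserved_ports=""
# Restore afterwards, e.g.:
sysctl -w net.ipv4.ip_local_reserved_ports="30000-32767"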
2017 Jun 15
2
gluster peer probe failing
Thanks, but my current settings are:
net.ipv4.ip_local_reserved_ports = 30000-32767
net.ipv4.ip_local_port_range = 32768 60999
meaning the reserved ports are already within the short int range, so maybe I misunderstood something, or is it a different issue?
From: Atin Mukherjee [mailto:amukherj at redhat.com]
Sent: Thursday, June 15, 2017 10:56 AM
To: Guy Cukierman <guyc at elminda.com>
Cc:
2017 Jun 16
2
gluster peer probe failing
Could you please send me the output of command "sysctl
net.ipv4.ip_local_reserved_ports".
Apart from the output of that command, please send the logs so we can look into the issue.
Thanks
Gaurav
On Thu, Jun 15, 2017 at 4:28 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> +Gaurav, he is the author of the patch, can you please comment here?
>
>
> On Thu, Jun 15, 2017 at 3:28
2017 Jun 20
0
gluster peer probe failing
Hi,
I am able to recreate the issue, and here is my RCA.
The maximum value, i.e. 32767, overflows while being manipulated,
and that case was previously not handled properly.
Hence glusterd was crashing with SIGSEGV.
The issue is being fixed in
https://bugzilla.redhat.com/show_bug.cgi?id=1454418 and is being
backported as well.
Thanks
Gaurav
On Tue, Jun 20, 2017 at 6:43 AM, Gaurav
2017 Jun 18
0
gluster peer probe failing
Hi,
Below please find the reserved ports and log, thanks.
sysctl net.ipv4.ip_local_reserved_ports:
net.ipv4.ip_local_reserved_ports = 30000-32767
glusterd.log:
[2017-06-18 07:04:17.853162] I [MSGID: 106487] [glusterd-handler.c:1242:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 192.168.1.17 24007
[2017-06-18 07:04:17.853237] D [MSGID: 0] [common-utils.c:3361:gf_is_local_addr]
2018 Feb 07
2
Ip based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote:
> After seeing the command history, I can see that you have 3 nodes, and
> first you peer probe 51.15.90.60 and 163.172.151.120 from
> 51.15.77.14.
> So here itself you have a 3 node cluster; after all this you go
> to node 2 and again peer probe 51.15.77.14.
> Ideally it should work with the above steps, but due to some
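(The sequence described above, reconstructed as commands from the addresses in the quote; the reverse probe from node 2 is how the first node becomes known to the pool by its address rather than a default name:)
# From node 1 (51.15.77.14):
gluster peer probe 51.15.90.60
gluster peer probe 163.172.151.120
# From node 2 (51.15.90.60), probe node 1 back:
gluster peer probe 51.15.77.14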
2017 Jun 20
1
gluster peer probe failing
Thanks Gaurav!
1. Any estimate of when this fix will be released?
2. Any recommended workaround?
Best,
Guy.
From: Gaurav Yadav [mailto:gyadav at redhat.com]
Sent: Tuesday, June 20, 2017 9:46 AM
To: Guy Cukierman <guyc at elminda.com>
Cc: Atin Mukherjee <amukherj at redhat.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] gluster peer probe failing