similar to: strange hostname issue on volume create command with famous Peer in Cluster state error message

Displaying 20 results from an estimated 3000 matches similar to: "strange hostname issue on volume create command with famous Peer in Cluster state error message"

2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
I'm guessing there's something wrong w.r.t. address resolution on node 1. From the logs it's quite clear to me that node 1 is unable to resolve the address configured in /etc/hosts, whereas the other nodes do. Could you paste the gluster peer status output from all the nodes? Also, can you please check if you're able to ping "pri.ostechnix.lan" from node1 only? Does
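A minimal way to run the checks suggested here, assuming the hostnames from this thread (getent shows the address /etc/hosts, or DNS, actually returns, which is what glusterd will resolve):

  gluster peer status
  ping -c 3 pri.ostechnix.lan
  getent hosts pri.ostechnix.lan    # prints the resolved address for the name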
2018 Feb 06
1
strange hostname issue on volume create command with famous Peer in Cluster state error message
I changed /etc/hosts to

  127.0.0.1        pri.ostechnix.lan pri
  51.15.90.60      sec.ostechnix.lan sec
  163.172.151.120  third.ostechnix.lan third

on every node, matching each node's own hostname to 127.0.0.1, then:

  root at pri:~# apt-get purge glusterfs-server
  root at pri:~# rm -rf /var/lib/glusterd/
  root at pri:~# rm -rf /var/log/glusterfs/
  root at pri:~# apt-get install glusterfs-server
  root at pri:~#
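For contrast, a commonly recommended /etc/hosts layout maps every node, including the local one, to its routable address rather than to 127.0.0.1; mapping a node's own hostname to loopback is a known way to confuse glusterd's address resolution. A sketch using the IPs from this thread (51.15.77.14 for pri is taken from the related messages below):

  127.0.0.1        localhost
  51.15.77.14      pri.ostechnix.lan pri
  51.15.90.60      sec.ostechnix.lan sec
  163.172.151.120  third.ostechnix.lan third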
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
Did you do gluster peer probe? Check out the documentation: http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/ On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote: > Hello, > > I installed glusterfs version 3.11.3 on 3 Ubuntu 16.04 nodes. All > machines have the same /etc/hosts. > > node1 hostname > pri.ostechnix.lan
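Per the linked Storage Pools guide, the trusted pool is built by probing the other nodes from any one node, roughly (hostnames from this thread):

  # on pri
  gluster peer probe sec.ostechnix.lan
  gluster peer probe third.ostechnix.lan
  gluster peer status    # should list 2 peers in "Peer in Cluster" state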
2018 Feb 07
2
Ip based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote: > After seeing command history, I could see that you have 3 nodes, and > firstly you are peer probing 51.15.90.60 and 163.172.151.120 from > 51.15.77.14. > So here itself you have a 3 node cluster; after all this you are going > on node 2 and again peer probing 51.15.77.14. > Ideally it should work, with above steps, but due to some
2018 Feb 07
2
Ip based peer probe volume create error
Hi, I attached logs. node3_cmd_history.log is empty because I did not run any command on node3. In the logs I saw that node 1's peer status contains IP addresses as peers, but on the other nodes node1 is visible by its name. Node 1 peer status is:

  root at pri:/var/log/glusterfs# gluster peer status
  Number of Peers: 2
  Hostname: 51.15.90.60
  Uuid: eed0d6c6-90ef-4705-b3f9-0b028d769df3
  State: Peer in Cluster
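The asymmetry described here is expected to a degree: peers probed by IP are recorded by IP. The Gluster docs suggest probing an already-known peer again by its hostname to attach the name to the existing entry; a sketch, not the thread's confirmed fix:

  # on pri, attach hostnames to the IP-probed peers
  gluster peer probe sec.ostechnix.lan
  gluster peer probe third.ostechnix.lan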
2017 Nov 30
2
Problems joining new gluster 3.10 nodes to existing 3.8
Hi, I have a problem joining four Gluster 3.10 nodes to an existing Gluster 3.8 cluster. My understanding is that this should work and not be too much of a problem. Peer probe is successful but the node is rejected:

  gluster> peer detach elkpinfglt07
  peer detach: success
  gluster> peer probe elkpinfglt07
  peer probe: success.
  gluster> peer status
  Number of Peers: 6
  Hostname: elkpinfglt02
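The recovery usually suggested for a rejected peer (per the Gluster administration docs; a sketch only, and note it clears the rejected node's local cluster metadata, so back up /var/lib/glusterd first):

  # on the rejected node
  systemctl stop glusterd
  cd /var/lib/glusterd && find . -mindepth 1 ! -name glusterd.info -delete   # keep only the node's UUID file
  systemctl start glusterd
  gluster peer probe elkpinfglt02     # any healthy member of the pool
  systemctl restart glusterd          # then re-check: gluster peer status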
2017 Dec 01
0
Problems joining new gluster 3.10 nodes to existing 3.8
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> wrote: > Hi, > > I have a problem joining four Gluster 3.10 nodes to an existing > Gluster 3.8 cluster. My understanding is that this should work and not be > too much of a problem. > > Peer probe is successful but the node is rejected: > > gluster> peer detach elkpinfglt07 > peer
2018 Feb 07
0
Ip based peer probe volume create error
From glusterd.logs, it looks like an address resolution issue in the "glusterd_brickinfo_new_from_brick" function call. After seeing the command history, I could see that you have 3 nodes: first you peer probe 51.15.90.60 and 163.172.151.120 from 51.15.77.14, so here itself you have a 3 node cluster; after all this you are going on node 2 and again peer probing 51.15.77.14. Ideally
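In other words, probing from a single node already forms the full pool; a quick sanity check is that every node then reports the same peer count, e.g.:

  # on 51.15.77.14 only
  gluster peer probe 51.15.90.60
  gluster peer probe 163.172.151.120
  # then on each of the three nodes
  gluster peer status    # expect "Number of Peers: 2" everywhere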
2013 Mar 11
12
Postfix setup
Dear All, I am planning to set up a mail server for my domain. Which one is preferred, Postfix or Sendmail? I came across a link http://ostechnix.wordpress.com/2013/02/08/setup-mail-server-using-postfixdovecotsquirrelmail-in-centosrhelscientific-linux-6-3-step-by-step/ for Postfix mail setup. It says, Prerequisites: - The mail server should have a valid MX record in the DNS server.
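For what it's worth, the MX prerequisite is easy to verify before starting; a sketch with a hypothetical domain:

  dig MX example.com +short
  # expect something like: 10 mail.example.com.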
2018 Feb 07
2
Ip based peer probe volume create error
Hello, I have 3 dedicated glusterfs 3.11.3 nodes. I can create volumes if I peer probe with hostnames, but IP-based probing is not working. What are the correct /etc/hosts and hostname values when doing an IP-based peer probe? Do I still need peer FQDN names in /etc/hosts? I need advice on how to fix this issue.
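For reference, the shape of a purely IP-based setup (a sketch only, since this thread is about it failing, and the brick paths are hypothetical): gluster itself does not require FQDN entries in /etc/hosts as long as every probe and every brick address uses the same IPs:

  gluster peer probe 51.15.90.60
  gluster peer probe 163.172.151.120
  gluster volume create vol0 replica 3 \
      51.15.77.14:/data/brick1/vol0 \
      51.15.90.60:/data/brick1/vol0 \
      163.172.151.120:/data/brick1/vol0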
2018 Feb 07
0
Ip based peer probe volume create error
Hi, Could you please send me the glusterd.logs and cmd-history logs from all the nodes. Thanks, Gaurav On Wed, Feb 7, 2018 at 5:13 PM, Ercan Aydoğan <ercan.aydogan at gmail.com> wrote: > Hello, > > I have 3 dedicated glusterfs 3.11.3 nodes. I can create volumes if > I peer probe with hostnames. > > But IP-based probing is not working. > > What's the correct
2018 Aug 28
4
OpenLDAP support in future versions of CentOS
Hello! I just joined this mailing list, so I apologize in advance if this topic has already been covered. Red Hat and SUSE announced they are no longer supporting OpenLDAP in future releases. https://www.ostechnix.com/redhat-and-suse-announced-to-withdraw-support-for-openldap/ However, we mainly use CentOS, and while it's a RH derivative, I wanted to find out what CentOS plans on doing in
2018 Mar 06
4
Fixing a rejected peer
Hello, So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. It actually began as the same problem with a different peer. I noticed it with (call it) gluster-2 when I couldn't make a new volume. I compared /var/lib/glusterd between them, and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
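A quick way to spot this kind of divergence (the <volname> placeholder is hypothetical, following the style used elsewhere in these threads): glusterd also keeps a checksum of each volume's config in vols/<volname>/cksum, and a mismatch there is a classic cause of the Peer Rejected state.

  # run on each node and compare the outputs
  cksum /var/lib/glusterd/vols/<volname>/info
  cat /var/lib/glusterd/vols/<volname>/cksum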
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I need to make sure it stays up, or schedule some downtime if it doesn't. Thanks. On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > > > On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> > wrote: >> >> Hi, >>
2018 Mar 06
0
Fixing a rejected peer
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawrence at squaretrade.com> wrote: > Hello, > > So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. > > It actually began as the same problem with a different peer. I noticed > it with (call it) gluster-2 when I couldn't make a new volume. I compared > /var/lib/glusterd between them, and
2017 Aug 29
3
peer rejected but connected
hi fellas, same old same. In the log of the probing peer I see:
...
[2017-08-29 13:36:16.882196] I [MSGID: 106493] [glusterd-handler.c:3020:__glusterd_handle_probe_query] 0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0, op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490] [glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid:
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
Looks like a bug, as I see tier-enabled = 0 is an additional entry in the info file on shchhv01. As per the code, this field should be written into the glusterd store if the op-version is >= 30706. What I am guessing is that since we didn't have the commit 33f8703a1 ("glusterd: regenerate volfiles on op-version bump up") in 3.8.4, while bumping up the op-version the info and volfiles were
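The workaround reported for this kind of mismatch (a sketch, not an official procedure; it hand-edits glusterd's store, so back up /var/lib/glusterd first) is to make the info files identical on all peers, either by deleting the extra line on shchhv01 or by adding it on the others, then restarting glusterd:

  # on each node whose info file lacks the entry (volume name from the message below)
  echo 'tier-enabled=0' >> /var/lib/glusterd/vols/shchst01/info
  systemctl restart glusterd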
2018 Feb 19
2
Upgrade from 3.8.15 to 3.12.5
Hi, I have a 3 node cluster (Found1, Found2, Found3) which I wanted to upgrade. I upgraded one node from 3.8.15 to 3.12.5 and now I am having multiple problems with the install. The 2 nodes not upgraded are still working fine (Found1, Found2), but the upgraded one shows Peer Rejected (Connected) when peer status is run, and it also has multiple bricks that report "Transport endpoint is not connected"
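Before chasing the rejected-peer state, it is worth confirming the cluster op-version, since mixed 3.8/3.12 pools are sensitive to it. A sketch (31202 as the 3.12-series op-version is an assumption to check against the release notes):

  gluster volume get all cluster.op-version      # current cluster-wide op-version
  # only after ALL nodes are upgraded:
  gluster volume set all cluster.op-version 31202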
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
I was attempting the same on a local sandbox and also have the same problem. Current: 3.8.4

Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: shchhv01-sto:/data/brick3/shchst01
Brick2: shchhv02-sto:/data/brick3/shchst01
Brick3:
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, which is placed in the "/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with glusterd.logs and command-history. Thanks, Gaurav On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi fellas, > same old same > in log of the probing peer I see: > ... > 2017-08-29
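A sketch of collecting exactly those files on each node (the <vol-name> placeholder follows the message above; cmd_history.log is the on-disk name of the command history):

  tar czf /tmp/$(hostname)-gluster-debug.tar.gz \
      /var/lib/glusterd/vols/<vol-name>/info \
      /var/log/glusterfs/glusterd.log \
      /var/log/glusterfs/cmd_history.log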