similar to: peer rejected but connected

Displaying 20 results from an estimated 6000 matches similar to: "peer rejected but connected"

2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, which is placed in the "/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with glusterd.logs and command-history. Thanks Gaurav On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi fellas, > same old same > in log of the probing peer I see: > ... > 2017-08-29
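For reference, a minimal sketch of collecting those files on each node; "myvol" is a hypothetical volume name, and the glusterd log file name varies by version and packaging (older 3.x builds use etc-glusterfs-glusterd.vol.log):

    # Run on every node in the cluster; adjust names to the actual volume.
    cat /var/lib/glusterd/vols/myvol/info        # per-volume store file
    cp /var/log/glusterfs/glusterd.log /tmp/     # glusterd daemon log
    cp /var/log/glusterfs/cmd_history.log /tmp/  # CLI command history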
2017 Aug 29
3
peer rejected but connected
hi fellas, same old same in log of the probing peer I see: ... [2017-08-29 13:36:16.882196] I [MSGID: 106493] [glusterd-handler.c:3020:__glusterd_handle_probe_query] 0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0, op_errno: 0, ret: 0 [2017-08-29 13:36:16.904961] I [MSGID: 106490] [glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid:
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.logs and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from above info, please provide glusterd logs,
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote: > Please provide the output of gluster volume info, gluster > volume status and gluster peer status. > > Apart from above info, please provide glusterd logs, > cmd_history.log. > > Thanks > Gaurav > > On Tue, Sep 12, 2017 at 2:22 PM, lejeczek > <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please check whether the brick process went down or crashed. Doing a volume start force should resolve the issue. On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > Please send me the logs as well, i.e. glusterd.logs and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
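For illustration, the force start suggested above is a single CLI call ("myvol" is a placeholder volume name):

    # Respawns only the brick processes that are down; running bricks are untouched.
    gluster volume start myvol force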
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote: > These symptoms appear to be the same as I've recorded in > this post: > > http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html > > On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee > <atin.mukherjee83 at gmail.com > <mailto:atin.mukherjee83 at gmail.com>> wrote: > > Additionally the
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post: http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com> wrote: > Additionally, the brick log file of the same brick would be required. > Please check whether the brick process went down or crashed. Doing a volume start
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and gluster peer status. Apart from above info, please provide glusterd logs, cmd_history.log. Thanks Gaurav On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi everyone > > I have 3-peer cluster with all vols in replica mode, 9 vols. > What I see, unfortunately, is one brick
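A quick sketch of the three requested commands, runnable on any node of the cluster:

    gluster volume info    # configuration of every volume
    gluster volume status  # online state, TCP port and PID of each brick
    gluster peer status    # peer membership and connection state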
2018 Mar 06
4
Fixing a rejected peer
Hello, So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. It actually began as the same problem with a different peer. I noticed it with (call it) gluster-2, when I couldn't create a new volume. I compared /var/lib/glusterd between them, and found that somehow the options in one of the vols differed. (I suspect this was due to attempting to create the volume via the
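One way such a divergence could be spotted, sketched here with a hypothetical peer "gluster-2" and volume "myvol" (glusterd compares a checksum of these store files when peers handshake):

    # Compare the per-volume store file across peers.
    cksum /var/lib/glusterd/vols/myvol/info
    ssh gluster-2 'cksum /var/lib/glusterd/vols/myvol/info'
    # If the checksums differ, inspect the actual difference:
    diff /var/lib/glusterd/vols/myvol/info <(ssh gluster-2 'cat /var/lib/glusterd/vols/myvol/info')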
2018 Mar 06
0
Fixing a rejected peer
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawrence at squaretrade.com> wrote: > Hello, > > So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. > > It actually began as the same problem with a different peer. I noticed > with (call it) gluster-2, when I couldn't make a new volume. I compared > /var/lib/glusterd between them, and
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
Looks like a bug, as I see tier-enabled = 0 is an additional entry in the info file in shchhv01. As per the code, this field should be written into the glusterd store if the op-version is >= 30706. What I am guessing is that since we didn't have commit 33f8703a1 "glusterd: regenerate volfiles on op-version bump up" in 3.8.4, while bumping up the op-version the info and volfiles were
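For context, the cluster op-version in question can be read and, once every node runs the new binaries, bumped from the CLI; the value below is only an example for a 3.12 cluster:

    gluster volume get all cluster.op-version          # current effective op-version
    gluster volume set all cluster.op-version 31200    # example: bump after all nodes are upgraded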
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I need to make sure it stays up, or schedule some downtime if it doesn't. Thanks. On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > > > On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> > wrote: >> >> Hi, >>
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
I was attempting the same on a local sandbox and also have the same problem. Current: 3.8.4
Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: shchhv01-sto:/data/brick3/shchst01
Brick2: shchhv02-sto:/data/brick3/shchst01
Brick3:
2018 Mar 06
2
Fixing a rejected peer
> On Mar 5, 2018, at 6:41 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > I'm tempted to repeat - down things, copy the checksum the "good" ones agree on, start things; but given that this has turned into a balloon-squeezing exercise, I want to make sure I'm not doing this the wrong way. > > Yes, that's the way. Copy
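Spelled out, the procedure being confirmed here might look roughly like this on the peer whose checksum disagrees ("good-node" and "myvol" are placeholders):

    systemctl stop glusterd
    # Replace the divergent volume store with a copy the "good" peers agree on.
    rm -rf /var/lib/glusterd/vols/myvol
    scp -r good-node:/var/lib/glusterd/vols/myvol /var/lib/glusterd/vols/
    systemctl start glusterd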
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
Yes Atin. I'll take a look. On Wed, Dec 20, 2017 at 11:28 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > Looks like a bug as I see tier-enabled = 0 is an additional entry in the > info file in shchhv01. As per the code, this field should be written into > the glusterd store if the op-version is >= 30706 . What I am guessing is > since we didn't have the commit
2017 Sep 12
2
one brick one volume process dies?
hi everyone I have a 3-peer cluster with all vols in replica mode, 9 vols. What I see, unfortunately, is one brick failing in one vol; when it happens it's always the same vol on the same brick. Command: gluster vol status $vol - would show the brick not online. Restarting glusterd with systemctl does not help; only a system reboot seems to help, until it happens next time. How to troubleshoot this
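As a starting point, a hedged sketch of what could be checked for a single offline brick (volume name and brick path are placeholders; brick logs are named after the brick path):

    gluster volume status myvol          # which brick shows Online: N, and its PID
    # Check the failed brick's own log, e.g. for a brick at /data/brick/myvol:
    less /var/log/glusterfs/bricks/data-brick-myvol.log
    gluster volume start myvol force     # respawn the dead brick process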
2018 Feb 19
2
Upgrade from 3.8.15 to 3.12.5
Hi, I have a 3 node cluster (Found1, Found2, Found3) which I wanted to upgrade. I upgraded one node from 3.8.15 to 3.12.5 and now I am having multiple problems with the install. The 2 nodes not upgraded are still working fine (Found1, Found2), but the one upgraded shows Peer Rejected (Connected) when peer status is run, and it also has multiple bricks that report "Transport endpoint is not connected"
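For reference, the recovery usually documented for a rejected peer is roughly the following; this is a sketch only, to be run on the rejected node, and it keeps that node's UUID intact (the probe target is taken from the node names in this report):

    systemctl stop glusterd
    # Clear the local store but keep the node's identity (glusterd.info).
    cd /var/lib/glusterd && find . -mindepth 1 ! -name glusterd.info -delete
    systemctl start glusterd
    gluster peer probe Found1            # re-sync configuration from a healthy peer
    systemctl restart glusterd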
2024 Jan 15
2
Upgrade 10.4 -> 11.1 making problems
Hi, just upgraded some gluster servers from version 10.4 to version 11.1, on Debian bullseye & bookworm. When only installing the packages, all is good: servers, volumes etc. work as expected. But one needs to test whether the systems still work after a daemon and/or server restart. Well, did a reboot, and after that the rebooted/restarted system is "out". Log message from working node: [2024-01-15
2018 Feb 19
0
Upgrade from 3.8.15 to 3.12.5
I believe the peer rejected issue is something we recently identified and has been fixed through https://bugzilla.redhat.com/show_bug.cgi?id=1544637 and is available in 3.12.6. I'd request you to upgrade to the latest version in the 3.12 series. On Mon, Feb 19, 2018 at 12:27 PM, <rwecker at ssd.org> wrote: > Hi, > > I have a 3 node cluster (Found1, Found2, Found3) which I wanted
2024 Jan 17
2
Upgrade 10.4 -> 11.1 making problems
ok, finally managed to get all servers, volumes etc. running, but it took a couple of restarts, cksum checks etc. One problem: a volume doesn't heal automatically, or doesn't heal at all.

gluster volume status
Status of volume: workdata
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
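If healing has to be kicked off manually, the standard heal commands would be (volume name taken from the status output above):

    gluster volume heal workdata         # trigger healing of pending entries
    gluster volume heal workdata info    # list entries still needing heal
    gluster volume heal workdata full    # full sweep if the pending index is incomplete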