Displaying 20 results from an estimated 8000 matches similar to: "peer rejected but connected"
2017 Sep 07
0
peer rejected but connected
Thank you for the acknowledgement.
On Mon, Sep 4, 2017 at 6:39 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> yes, I see things got lost in transit, I said before:
>
> I did from first time and now not rejected.
> now I'm restarting the fourth (newly added) peer's glusterd
> and.. it seems to work. <- HERE! (even though....
>
> and then I asked:
>
2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue.
The info file on node 10.5.6.17 contains an additional property,
"tier-enabled", which is not present in the info file on the other 3 nodes.
When a gluster peer probe call is made, the cksum is compared in order to
maintain consistency across the cluster. In this case the two files differ,
leading to different cksums, causing the state
in
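A quick way to spot this kind of mismatch is to compare the volume's on-disk metadata across the peers; a minimal sketch, assuming a volume called "myvol" and ssh access to each node (host names are placeholders):
    # Hedged sketch: compare each peer's copy of the volume metadata.
    # VOL and the host list are placeholders; substitute real values.
    VOL=myvol
    for host in 10.5.6.17 node2 node3 node4; do
        echo "== $host =="
        ssh "$host" "cat /var/lib/glusterd/vols/$VOL/cksum"
        ssh "$host" "grep tier-enabled /var/lib/glusterd/vols/$VOL/info"
    done
    # The node whose info file carries the extra tier-enabled line will also
    # show a different cksum, which is what makes the probe reject the peer.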
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, which is placed in the
"/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with
the glusterd logs and command history.
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi fellas,
> same old same
> in log of the probing peer I see:
> ...
> 2017-08-29
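For reference, the files Gaurav asks for above can be bundled on each node roughly like this (default paths assumed; older releases name the glusterd log etc-glusterfs-glusterd.vol.log):
    # Hedged sketch: collect the volume info file plus the glusterd and command
    # history logs from one node. Paths assume a default installation.
    VOL=myvol                                    # placeholder volume name
    tar czf "gluster-debug-$(hostname -s).tar.gz" \
        "/var/lib/glusterd/vols/$VOL/info" \
        /var/log/glusterfs/glusterd.log \
        /var/log/glusterfs/cmd_history.log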
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally, the brick log file of the same brick would be required.
> Please check whether the brick process went down or crashed. Doing a volume start
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. the glusterd logs and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from above info, please provide glusterd logs,
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start
force should resolve the issue.
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. the glusterd logs and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
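For completeness, acting on that advice would look roughly like this (the volume name is a placeholder):
    # Hedged sketch: confirm which brick process died, then force-start the
    # volume so glusterd respawns only the missing brick. "myvol" is a placeholder.
    gluster volume status myvol        # the dead brick shows "N" in the Online column
    gluster volume start myvol force   # safe to run on an already-started volume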
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Aug 29
3
peer rejected but connected
hi fellas,
same old same
in log of the probing peer I see:
...
[2017-08-29 13:36:16.882196] I [MSGID: 106493]
[glusterd-handler.c:3020:__glusterd_handle_probe_query]
0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0,
op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490]
[glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
2017 Oct 06
1
Glusterd not working with systemd in redhat 7
On 10/04/2017 06:17 AM, Niels de Vos wrote:
> On Wed, Oct 04, 2017 at 09:44:44AM +0000, ismael mondiu wrote:
>> Hello,
>>
>> I'd like to test if version 3.10.6 fixes the problem. I'm wondering which is the correct way to upgrade from 3.10.5 to 3.10.6.
>>
>> It's hard to find upgrade guides for a minor release. Can you help me please?
>
>
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Hello,
I'd like to test if version 3.10.6 fixes the problem. I'm wondering which is the correct way to upgrade from 3.10.5 to 3.10.6.
It's hard to find upgrade guides for a minor release. Can you help me please?
Thanks in advance
Ismael
________________________________
From: Atin Mukherjee <amukherj at redhat.com>
Sent: Sunday, 17 September 2017 14:56
To: ismael
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
Apart from above info, please provide glusterd logs, cmd_history.log.
Thanks
Gaurav
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9 vols.
> What I see, unfortunately, is one brick
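The state Gaurav asks for here can be captured in one go, e.g.:
    # Hedged sketch: dump the cluster-wide state into a single file to attach.
    {
      gluster volume info
      gluster volume status
      gluster peer status
    } > "gluster-state-$(hostname -s).txt" 2>&1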
2017 Sep 17
2
Glusterd not working with systemd in redhat 7
The backport just got merged a few minutes back and this fix should be
available in the next update of 3.10.
On Fri, Sep 15, 2017 at 2:08 PM, ismael mondiu <mondiu at hotmail.com> wrote:
> Hello Team,
>
> Do you know when the backport to 3.10 will be available?
>
> Thanks
>
>
>
>
> ------------------------------
> *From:* Atin Mukherjee <amukherj at
2017 Oct 05
0
Glusterd not working with systemd in redhat 7
So I have the root cause. Basically, as part of the patch we write the
brickinfo->uuid into the brickinfo file only when there is a change in the
volume. As per the brickinfo files you shared, the uuid was not saved since
there was no new change in the volume, and hence the uuid was always NULL in
resolve brick, because of which glusterd went for local address
resolution. Having this done with a
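A rough way to check for the symptom Atin describes is to look for a uuid entry in the on-disk brick files; a sketch, with the path layout and key name assumed rather than taken from the thread:
    # Hedged sketch: the per-brick files live under the volume's bricks/
    # directory; an empty result here would match the "uuid was always NULL"
    # behaviour described above. Layout and key name are assumptions.
    VOL=myvol
    grep -H "uuid" /var/lib/glusterd/vols/$VOL/bricks/* || echo "no uuid stored on disk"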
2017 Aug 18
1
Glusterd not working with systemd in redhat 7
On Fri, Aug 18, 2017 at 12:22:33PM +0530, Atin Mukherjee wrote:
> You're hitting a race here. By the time glusterd tries to resolve the
> address of one of the remote bricks of a particular volume, the n/w
> interface is not up by that time. We have fixed this issue in mainline and
> 3.12 branch through the following commit:
We still maintain 3.10 for at least 6 months. It
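Not part of this thread, but a common systemd-level stop-gap while waiting for a fixed build is to order glusterd after network-online.target via a drop-in; a hedged sketch:
    # Hedged sketch: delay glusterd startup until the network is fully online.
    # This is a generic systemd workaround, not the code fix referenced above.
    mkdir -p /etc/systemd/system/glusterd.service.d
    cat > /etc/systemd/system/glusterd.service.d/wait-online.conf <<'EOF'
    [Unit]
    Wants=network-online.target
    After=network-online.target
    EOF
    systemctl daemon-reload
    # NetworkManager-wait-online.service (or the network.service equivalent)
    # must be enabled for network-online.target to actually wait.
    systemctl restart glusterd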
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Thanks Niels,
We want to install it on redhat 7. We work in a secured environment with no internet access.
We download the packages here https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/ and then we push the packages to the server and install them via the rpm command.
Do you think this is a correct way to upgrade gluster when working without internet access?
Thanks in advance
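A sketch of that offline path, with the package set and file names illustrative rather than exact:
    # Hedged sketch: fetch the 3.10.x rpms on a connected machine, copy them
    # across, then install the whole set locally so dependencies resolve together.
    wget -r -np -nd -A 'glusterfs*.rpm' \
        https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/
    # ...transfer the rpms to the isolated server, then:
    yum localinstall ./glusterfs*.rpm      # or: rpm -Uvh ./glusterfs*.rpm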
2017 Oct 04
0
Glusterd not working with systemd in redhat 7
On Wed, Oct 04, 2017 at 09:44:44AM +0000, ismael mondiu wrote:
> Hello,
>
> I'd like to test if version 3.10.6 fixes the problem. I'm wondering which is the correct way to upgrade from 3.10.5 to 3.10.6.
>
> It's hard to find upgrade guides for a minor release. Can you help me please?
Packages for GlusterFS 3.10.6 are available in the testing repository of
the
2017 Aug 18
0
Glusterd not working with systemd in redhat 7
On Fri, Aug 18, 2017 at 1:38 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Fri, Aug 18, 2017 at 12:22 PM, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>>
>> You're hitting a race here. By the time glusterd tries to resolve the
>> address of one of the remote bricks of a particular volume, the n/w
>> interface is not up by that
2017 Aug 18
0
Glusterd not working with systemd in redhat 7
You're hitting a race here. By the time glusterd tries to resolve the
address of one of the remote bricks of a particular volume, the n/w
interface is not yet up. We have fixed this issue in mainline and the
3.12 branch through the following commit:
commit 1477fa442a733d7b1a5ea74884cac8f29fbe7e6a
Author: Gaurav Yadav <gyadav at redhat.com>
Date: Tue Jul 18 16:23:18 2017 +0530
2017 Oct 04
0
Glusterd not working with systemd in redhat 7
On Wed, Oct 04, 2017 at 12:17:23PM +0000, ismael mondiu wrote:
>
> Thanks Niels,
>
> We want to install it on redhat 7. We work on a secured environment
> with no internet access.
>
> We download the packages here
> https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/ and
> then, we push the package to the server and install them via rpm
> command .