Thank you for the acknowledgement.
On Mon, Sep 4, 2017 at 6:39 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> yes, I see things got lost in transit, I said before:
>
> I redid it, wiped and re-probed from a different peer than I did the
> first time, and now it is not rejected.
> Now I'm restarting the fourth (newly added) peer's glusterd
> and... it seems to work. <- HERE! (even though....
>
> and then I asked:
>
> Is there anything I should double-check to make sure all is 100% fine
> before I use that newly added peer for bricks?
>
> Below is my full message. Basically, new peers do not get rejected any
> more.
>
>
> On 04/09/17 13:56, Gaurav Yadav wrote:
>
>>
>> Executing "gluster volume set all cluster.op-version
<op-version>"on all
>> the existing nodes will solve this problem.
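>>
>> For illustration, a minimal sketch of that sequence (assuming the
>> target op-version is 31004; the same commands appear later in this
>> thread):
>>
>> $ gluster volume get all cluster.op-version    # check the current value
>> $ gluster volume set all cluster.op-version 31004
>> $ gluster volume get all cluster.op-version    # should now report 31004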
>>
>> If the issue still persists, please provide me the following logs
>> (working cluster + newly added peer):
>> 1. glusterd.info file from /var/lib/glusterd from all nodes
>> 2. glusterd.logs from all nodes
>> 3. info file from all the nodes
>> 4. cmd-history from all the nodes
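>>
>> On a default installation these are usually found at the paths below
>> (assumed defaults; adjust if your layout differs):
>>
>> /var/lib/glusterd/glusterd.info          # per-node UUID and operating-version
>> /var/log/glusterfs/glusterd.log          # glusterd log
>> /var/lib/glusterd/vols/<VOLNAME>/info    # per-volume info file
>> /var/log/glusterfs/cmd_history.log       # command history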
>>
>> Thanks
>> Gaurav
>>
>> On Mon, Sep 4, 2017 at 2:09 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>>
>> I do not see it; did you write anything?
>>
>> On 03/09/17 11:54, Gaurav Yadav wrote:
>>
>>
>>
>> On Fri, Sep 1, 2017 at 9:02 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>>
>> Did you miss my reply before? Here it is:
>>
>> now, a "weir" thing
>>
>> I did that; the fourth peer was still rejected, and the fourth
>> (probed) peer's glusterd would still fail to restart (all after
>> upping to 31004).
>> I redid it, wiped and re-probed from a different peer than I did the
>> first time, and now it is not rejected.
>> Now I'm restarting the fourth (newly added) peer's glusterd and... it
>> seems to work (even though tier-enabled=0 is still there, now on all
>> four peers; it was not there on the three previously working peers).
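>>
>> For reference, a sketch of such a wipe-and-reprobe (assumed default
>> paths, <new-peer> is a placeholder; not the exact commands used here;
>> glusterd.info is kept so the peer keeps its UUID):
>>
>> # on the rejected/new peer
>> $ systemctl stop glusterd
>> $ cp /var/lib/glusterd/glusterd.info /root/glusterd.info.bak
>> $ rm -rf /var/lib/glusterd/*
>> $ cp /root/glusterd.info.bak /var/lib/glusterd/glusterd.info
>> $ systemctl start glusterd
>>
>> # then probe it again, from a different healthy peer than before
>> $ gluster peer probe <new-peer>
>> $ gluster peer status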
>>
>> Is there anything I should double-check to make sure all is 100% fine
>> before I use that newly added peer for bricks?
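>>
>> A few checks that are commonly run at this point (a non-exhaustive
>> sketch; <VOLNAME> is a placeholder):
>>
>> $ gluster peer status      # every peer should show "Peer in Cluster (Connected)"
>> $ gluster pool list
>> $ gluster volume get all cluster.op-version    # same value everywhere
>> $ gluster volume status <VOLNAME>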
>>
>> For this I only need the logs, to see what has gone wrong.
>>
>>
>> Please provide me the following things
>> (working cluster + newly added peer):
>> 1. glusterd.info file from /var/lib/glusterd from all nodes
>> 2. glusterd.logs from all nodes
>> 3. info file from all the nodes
>> 4. cmd-history from all the nodes
>>
>>
>> On 01/09/17 11:11, Gaurav Yadav wrote:
>>
>> I replicated the problem locally, and with the steps I suggested to
>> you it worked for me...
>>
>> Please provide me the following things
>> (working cluster + newly added peer):
>> 1. glusterd.info file from /var/lib/glusterd from all nodes
>> 2. glusterd.logs from all nodes
>> 3. info file from all the nodes
>> 4. cmd-history from all the nodes
>>
>>
>> On Fri, Sep 1, 2017 at 3:39 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>>
>> Like I said, I upgraded from 3.8 to 3.10 a while ago; it is 3.10.5 at
>> the moment, and only now, with 3.10.5, did I try to add a peer.
>>
>> On 01/09/17 10:51, Gaurav Yadav wrote:
>>
>> What is gluster --version on all these nodes?
>>
>> On Fri, Sep 1, 2017 at 3:18 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>>
>> on the first node I got:
>>
>> $ gluster volume set all cluster.op-version 31004
>> volume set: failed: Commit failed on 10.5.6.49. Please check log file
>> for details.
>>
>> but I immediately proceeded to the remaining nodes and:
>>
>> $ gluster volume get all cluster.op-version
>> Option                                  Value
>> ------                                  -----
>> cluster.op-version                      30712
>> $ gluster volume set all cluster.op-version 31004
>> volume set: failed: Required op-version (31004) should not be equal
>> or lower than current cluster op-version (31004).
>> $ gluster volume get all cluster.op-version
>> Option                                  Value
>> ------                                  -----
>> cluster.op-version                      31004
>>
>> last, third node:
>>
>> $ gluster volume get all cluster.op-version
>> Option                                  Value
>> ------                                  -----
>> cluster.op-version                      30712
>> $ gluster volume set all cluster.op-version 31004
>> volume set: failed: Required op-version (31004) should not be equal
>> or lower than current cluster op-version (31004).
>> $ gluster volume get all cluster.op-version
>> Option                                  Value
>> ------                                  -----
>> cluster.op-version                      31004
>>
>> So, even though it failed as above, I now see that it is 31004 on all
>> three peers, at least according to the "volume get all
>> cluster.op-version" command.
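>>
>> As a cross-check, the op-version is also recorded per node in
>> glusterd.info (assuming the default path):
>>
>> $ grep operating-version /var/lib/glusterd/glusterd.info
>> operating-version=31004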
>>
>>
>> On 01/09/17 10:38, Gaurav Yadav wrote:
>>
>> gluster volume set all cluster.op-version 31004
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>