Displaying 20 results from an estimated 299 matches for "peljasz".
2017 Sep 07
0
peer rejected but connected
Thank you for the acknowledgement.
On Mon, Sep 4, 2017 at 6:39 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> yes, I see things got lost in transit, I said before:
>
> I did it the first time and now it is not rejected.
> Now I'm restarting the fourth (newly added) peer's glusterd
> and... it seems to work. <- HERE! (even though....
>
> and then I ask...
2017 Sep 01
2
peer rejected but connected
...tency arose due to the upgrade you did.
Workaround:
1. Go to node 10.5.6.17.
2. Open the info file at "/var/lib/glusterd/vols/<vol-name>/info" and remove
the line "tier-enabled=0".
3. Restart the glusterd service.
4. Peer probe again.
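For the record, step 2 can be sketched with a one-line sed. The block below operates on a throwaway copy so it can be dry-run safely; the sample keys (type=2, count=3, version=31) are made up, not from the thread, and on the real node the file is /var/lib/glusterd/vols/<vol-name>/info:

```shell
# Dry-run sketch of step 2: delete the stray "tier-enabled=0" line.
# A scratch file stands in for /var/lib/glusterd/vols/<vol-name>/info;
# the keys below are invented sample content.
info=$(mktemp)
printf 'type=2\ncount=3\ntier-enabled=0\nversion=31\n' > "$info"

# Remove only the exact offending line; everything else is left untouched.
sed -i '/^tier-enabled=0$/d' "$info"

cat "$info"
# on the real node, follow with:  systemctl restart glusterd
# and then:                       gluster peer probe <hostname>
```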
Thanks
Gaurav
On Thu, Aug 31, 2017 at 3:37 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> attached the lot as per your request.
>
> Would be really great if you could find the root cause of this and suggest
> a resolution. Fingers crossed.
> thanks, L.
>
> On 31/08/17 05:34, Gaurav Yadav wrote:
>
>> Could you please send entire co...
2017 Sep 04
0
peer rejected but connected
...ersists, please provide me the following logs (working cluster
+ newly added peer):
1. glusterd.info file from /var/lib/glusterd from all nodes
2. glusterd.logs from all nodes
3. info file from all the nodes.
4. cmd-history from all the nodes.
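One way to package those four items from a single node is a plain tar archive. The sketch below uses scratch directories standing in for /var/lib/glusterd and /var/log/glusterfs so it can be dry-run; on a real node, point BASE and LOGS at the actual paths and run as root:

```shell
# Collect glusterd.info, the glusterd log, the vols/*/info files and
# cmd_history.log into one archive. Scratch dirs (with empty placeholder
# files) stand in for the real paths so this sketch is dry-runnable.
BASE=$(mktemp -d)          # stands in for /var/lib/glusterd
LOGS=$(mktemp -d)          # stands in for /var/log/glusterfs
mkdir -p "$BASE/vols/testvol"
touch "$BASE/glusterd.info" "$BASE/vols/testvol/info" "$BASE/cmd_history.log"
touch "$LOGS/glusterd.log"

OUT="$BASE.logs.tar.gz"
tar -czf "$OUT" -C "$BASE" . -C "$LOGS" .
tar -tzf "$OUT"
```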
Thanks
Gaurav
On Mon, Sep 4, 2017 at 2:09 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> I do not see it, did you write anything?
>
> On 03/09/17 11:54, Gaurav Yadav wrote:
>
>>
>>
>> On Fri, Sep 1, 2017 at 9:02 PM, lejeczek <peljasz at yahoo.co.uk <mailto:
>> peljasz at yahoo.co.uk>> wrote:
>>
>>...
2017 Aug 02
0
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
.... If you
had killed any brick process with the SIGKILL signal instead of SIGTERM, this
is expected, as portmap_signout is not received by glusterd in the former
case and the old portmap entry is never wiped off.
Please restart glusterd service. This should fix the problem.
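The SIGTERM/SIGKILL difference is easy to demonstrate with a plain shell process standing in for the brick: a TERM handler (playing the role of the portmap_signout path) runs on SIGTERM but never gets a chance on SIGKILL. This is only an illustration of the signal semantics, not gluster code:

```shell
# A TERM'd process can run its cleanup handler; a KILL'd one cannot.
log=$(mktemp)

# "sleep & wait" so the trap fires as soon as the signal arrives.
( trap 'echo signout >>"$log"; exit 0' TERM; sleep 5 & wait ) &
p1=$!
sleep 1; kill -TERM "$p1"; sleep 1     # handler runs: "signout" is logged

( trap 'echo signout >>"$log"; exit 0' TERM; sleep 5 & wait ) &
p2=$!
sleep 1; kill -KILL "$p2"; sleep 1     # SIGKILL: handler never runs

cat "$log"                             # exactly one "signout" line
```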
On Tue, 1 Aug 2017 at 23:03, peljasz <peljasz at yahoo.co.uk> wrote:
> how critical is the above?
> I get plenty of these on all three peers.
>
> hi guys
>
> I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
> behavior.
> I see: $ gluster vol status $_vol detail; takes a long time and
> mos...
2017 Aug 01
4
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
how critical is the above?
I get plenty of these on all three peers.
hi guys
I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
behavior.
I see: $ gluster vol status $_vol detail; takes a long time and
mostly times out.
I do:
$ gluster vol heal $_vol info
and I see:
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Number
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.logs and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from above info, please provide glusterd logs, cmd_history.log.
>>...
2017 Sep 13
3
one brick one volume process dies?
...Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from the above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
>
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9
> vols.
> What I see, unfortunately, is that one brick fails in one
> vol; when it happens it's always the same vol on t...
2017 Sep 28
1
one brick one volume process dies?
...13 Sep 2017 at 16:28, Gaurav Yadav
> <gyadav at redhat.com <mailto:gyadav at redhat.com>> wrote:
>
> Please send me the logs as well i.e glusterd.logs
> and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>>
> wrote:
>
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
> Please provide the output of gluster
> volume info, gluster volume status and
> gluster...
2017 Aug 01
2
"other names" - how to clean/get rid of ?
hi
how to get rid of entries in "Other names" ?
thanks
L.
2017 Aug 02
0
"other names" - how to clean/get rid of ?
Are you referring to the "Other names" field in the peer status output? If so, a
peerinfo entry with other names populated means the peer may have
multiple n/w interfaces, or reverse address resolution is picking up this
name. But why are you worried about this part?
On Tue, 1 Aug 2017 at 23:24, peljasz <peljasz at yahoo.co.uk> wrote:
> hi
>
> how to get rid of entries in "Other names" ?
>
> thanks
> L.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/...
2017 Sep 13
2
one brick one volume process dies?
...rocess went down or crashed. Doing a volume start force
should resolve the issue.
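A hedged sketch of acting on that advice: check whether any brick daemon (glusterfsd) is still alive before forcing the volume back up. The volume name "myvol" and the `command -v` guard (which lets the sketch run on a machine without gluster installed) are illustrative, not from the thread:

```shell
# If no glusterfsd is running, force-start the volume to respawn the brick.
VOL="myvol"    # hypothetical volume name; use your own
if pgrep -x glusterfsd >/dev/null 2>&1; then
    status="brick process running"
else
    status="brick process down"
    # "start force" respawns missing bricks without touching healthy ones;
    # guarded so the sketch is harmless on a non-gluster box
    command -v gluster >/dev/null 2>&1 && gluster volume start "$VOL" force || true
fi
echo "$status"
```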
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well i.e glusterd.logs and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>>
>>
>> On 13/09/17 06:21, Gaurav Yadav wrote:
>>
> Please provide the output of gluster volume info, gluster volume status
>>> and gluster peer status.
>>>
>>> Apart from above info, please provide glusterd logs...
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick,
If one of the self-heal processes is down, will the statistics heal-count
command work?
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) that unavailable (while it was unavailable) peer got detached with the
> "gluster peer detach" command, which succeeded, so now the cluster comprises
> three...
2017 Sep 04
2
heal info OK but statistics not working
...nsuccessful on bricks that are down. Please check
if all brick processes are running.
On 04/09/17 11:47, Atin Mukherjee wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> On Mon, Sep 4, 2017 at 4:07 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
>
> hi all
>
> this:
> $ vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vol statistics
> Gathering crawl statistics o...
2017 Sep 13
0
one brick one volume process dies?
...a volume start
> force should resolve the issue.
>
> On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
>
>> Please send me the logs as well i.e glusterd.logs and cmd_history.log.
>>
>>
>> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>>
>>>
>>>
>>> On 13/09/17 06:21, Gaurav Yadav wrote:
>>>
>> Please provide the output of gluster volume info, gluster volume status
>>>> and gluster peer status.
>>>>
>>>> Apart from abov...
2018 May 02
1
unable to remove ACLs
On 01/05/18 23:59, Vijay Bellur wrote:
>
>
> On Tue, May 1, 2018 at 5:46 AM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
>
> hi guys
>
> I have a simple case of:
> $ setfacl -b
> not working!
> I copy a folder outside of autofs mounted gluster vol,
> to a regular fs and removing acl works as expected.
>...
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, placed in the
"/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with
glusterd.logs and the command history.
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi fellas,
> same old same
> in log of the probing peer I see:
> ...
> [2017-08-29 13:36:16.882196] I [MSGID: 106493]
> [glusterd-handler.c:3020:__glusterd_handle_probe_query] 0-glusterd:
> Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0, op_errno: 0...
2017 Aug 29
3
peer rejected but connected
hi fellas,
same old same
in log of the probing peer I see:
...
[2017-08-29 13:36:16.882196] I [MSGID: 106493]
[glusterd-handler.c:3020:__glusterd_handle_probe_query]
0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0,
op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490]
[glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
2018 May 01
0
unable to remove ACLs
On Tue, May 1, 2018 at 5:46 AM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi guys
>
> I have a simple case of:
> $ setfacl -b
> not working!
> I copy a folder outside of autofs mounted gluster vol, to a regular fs and
> removing acl works as expected.
> Inside mounted gluster vol I seem to be able to modify/remove ACLs...
2018 May 01
2
unable to remove ACLs
hi guys
I have a simple case of:
$ setfacl -b
not working!
I copy a folder out of the autofs-mounted gluster vol to a
regular fs, and there removing the ACL works as expected.
Inside the mounted gluster vol I seem to be able to
modify/remove ACLs for users, groups and masks, but that one
simple, important thing does not work.
It is also not the case of default ACLs being enforced from
the parent, for I
2017 Jul 24
0
vol status detail - times out?
...es it could, as depending on the number of bricks there may be too many brick
ops involved. This is the reason we introduced the --timeout option in the CLI,
which can be used to set a larger timeout value. However, this fix is
available from release-3.9 onwards.
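On 3.8.x, where the CLI's own --timeout flag is not yet available, a coreutils timeout(1) wrapper at least puts an upper bound on how long the shell hangs. This is my workaround suggestion, not from the thread; `sleep` stands in for the slow gluster call so the sketch is dry-runnable:

```shell
# Bound the wall-clock wait on a slow command; exit code 124 means the
# wrapped command was cut off by timeout(1).
rc=0
timeout 2 sleep 10 || rc=$?
echo "exit=$rc"    # prints exit=124

# on >= 3.9 use the CLI's native knob instead, e.g.:
#   gluster --timeout=600 volume status <volname> detail
```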
On Mon, Jul 24, 2017 at 3:54 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi fellas
>
> would you know what could be the problem with: vol status detail always times
> out?
> After I did the above I had to restart glusterd on the peer the command was
> issued on.
> I run 3.8.14. Everything seems to work OK.
>
> man...