similar to: mount points @install time

Displaying 20 results from an estimated 8000 matches similar to: "mount points @install time"

2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue. The info file on node 10.5.6.17 contains an additional property "tier-enabled" which is not present in the info file of the other 3 nodes; hence, when the gluster peer probe call is made, the cksum is compared in order to maintain consistency across the cluster. In this case the two files differ, leading to different cksums, causing state in
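A minimal sketch of checking for that mismatch, assuming the standard glusterd layout; VOLNAME and the remote hostname are placeholders:

    # compare the volume's info file and checksum between two peers
    diff /var/lib/glusterd/vols/VOLNAME/info \
         <(ssh peer2 cat /var/lib/glusterd/vols/VOLNAME/info)
    cat /var/lib/glusterd/vols/VOLNAME/cksum

    # look for the property mentioned above
    grep tier-enabled /var/lib/glusterd/vols/VOLNAME/info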
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote: > Please provide the output of gluster volume info, gluster > volume status and gluster peer status. > > Apart from above info, please provide glusterd logs, > cmd_history.log. > > Thanks > Gaurav > > On Tue, Sep 12, 2017 at 2:22 PM, lejeczek > <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.logs and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from above info, please provide glusterd logs,
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please check whether the brick process went down or crashed. Doing a volume start force should resolve the issue. On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > Please send me the logs as well, i.e. glusterd.logs and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
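For reference, the checks suggested above might look like this; VOLNAME is a placeholder and the brick log name is derived from the brick path on your system:

    # confirm which brick shows as not online
    gluster volume status VOLNAME

    # look in the brick's own log for a crash or shutdown message
    less /var/log/glusterfs/bricks/BRICK-PATH.log

    # restart the missing brick process without touching the healthy ones
    gluster volume start VOLNAME force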
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network, from the rest of the cluster. 2) that peer, while it was unavailable, got detached with the "gluster peer detach" command, which succeeded, so now the cluster comprises three peers. 3) the self-heal daemon (for some reason) does not start (even after an attempt to restart glusterd) on the peer which probed that fourth peer. 4) fourth
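A hedged way to check point 3 above, i.e. whether the self-heal daemon actually came up on the affected peer (VOLNAME is a placeholder):

    # the Self-heal Daemon entries should show Online "Y" for every peer
    gluster volume status VOLNAME

    # the self-heal daemon keeps its own log on each node
    tail -n 100 /var/log/glusterfs/glustershd.log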
2023 Apr 19
2
bash test ?
On 19/04/2023 08:04, wwp wrote: > Hello lejeczek, > > > On Wed, 19 Apr 2023 07:50:29 +0200 lejeczek via CentOS <centos at centos.org> wrote: > >> Hi guys. >> >> I cannot wrap my head around this: >> >> -> $ unset _Val; test -z ${_Val}; echo $? >> 0 >> -> $ unset _Val; test -n ${_Val}; echo $? >> 0 >> -> $
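The surprising output above looks like the usual quoting effect rather than a bug: with _Val unset, ${_Val} expands to nothing, so the second command runs test with the single argument -n, and a single non-empty argument is always true. Quoting the expansion gives the expected result:

    unset _Val
    test -z "${_Val}"; echo $?   # 0 - empty string, as expected
    test -n "${_Val}"; echo $?   # 1 - quoted empty operand, correctly false
    test -n ${_Val};   echo $?   # 0 - the operand vanishes; test sees only "-n"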
2018 May 02
1
unable to remove ACLs
On 01/05/18 23:59, Vijay Bellur wrote: > > > On Tue, May 1, 2018 at 5:46 AM, lejeczek > <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote: > > hi guys > > I have a simple case of: > $ setfacl -b > not working! > I copy a folder outside of autofs mounted gluster vol, > to a regular fs and removing acl works as
2017 Aug 29
3
peer rejected but connected
hi fellas, same old same in log of the probing peer I see: ... [2017-08-29 13:36:16.882196] I [MSGID: 106493] [glusterd-handler.c:3020:__glusterd_handle_probe_query] 0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0, op_errno: 0, ret: 0 [2017-08-29 13:36:16.904961] I [MSGID: 106490] [glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid:
2018 May 01
2
unable to remove ACLs
hi guys I have a simple case of: $ setfacl -b not working! I copy a folder out of the autofs-mounted gluster vol to a regular fs and removing the ACL works as expected. Inside the mounted gluster vol I seem to be able to modify/remove ACLs for users, groups and masks, but that one simple, important thing does not work. It is also not the case of default ACLs being enforced from the parent, for I
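One thing worth checking (an assumption, not something stated in the thread) is whether the gluster FUSE client was mounted with ACL support at all; without the acl mount option, setfacl on the client can misbehave. A sketch, with server, volume and paths as placeholders:

    # mount the volume with ACL support on the FUSE client
    mount -t glusterfs -o acl server1:/VOLNAME /mnt/VOLNAME

    # for autofs, the acl option would go into the map entry, e.g.
    #   VOLNAME  -fstype=glusterfs,acl  server1:/VOLNAME

    # then clearing all ACL entries should behave as on a local fs
    setfacl -b /mnt/VOLNAME/somedir
    getfacl /mnt/VOLNAME/somedir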
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote: > These symptoms appear to be the same as I've recorded in > this post: > > http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html > > On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee > <atin.mukherjee83 at gmail.com > <mailto:atin.mukherjee83 at gmail.com>> wrote: > > Additionally the
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, which is placed in the "/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with glusterd.logs and command-history. Thanks Gaurav On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi fellas, > same old same > in log of the probing peer I see: > ... > 2017-08-29
2017 Jul 24
2
vol status detail - times out?
hi fellas would you know what could be the problem with: vol status detail always timing out? After I did the above I had to restart glusterd on the peer where the command was issued. I run 3.8.14. Everything else seems to work OK. many thanks L.
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick, If one of the self-heal processes is down, will the statistics heal-count command work? On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > 1) one peer, out of four, got separated from the network, from the rest of > the cluster. > 2) that unavailable(while it was unavailable) peer got detached with > "gluster peer detach" command
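For reference, the commands in question (VOLNAME is a placeholder); heal-count is gathered per brick, which is presumably why a missing self-heal daemon matters:

    # per-brick count of entries pending heal
    gluster volume heal VOLNAME statistics heal-count

    # detailed crawl statistics for the self-heal runs
    gluster volume heal VOLNAME statistics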
2015 Jun 19
3
how do I conceptualize system & virtual users?
I guess this would be a common case; I am hoping for some final clarification. A few Linux boxes share an LDAP (multi-master) backend that PAM/SSSD uses to authenticate users, and these LDAPs are also used by Samba; users start @ uid 1000. The boxes are in both the same DNS and Samba domains. Do I treat these users as system or virtual users from a postfix/dovecot perspective? If it can be a
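A rough way to reason about it (an assumption, not from the thread): if the account is resolvable through NSS/SSSD it behaves as a system user from postfix/dovecot's point of view, and that can be checked directly; the username is a placeholder:

    # does the OS (NSS via SSSD) know the account?
    getent passwd jsmith

    # does dovecot resolve the same account through its userdb?
    doveadm user jsmith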
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post: http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com> wrote: > Additionally the brick log file of the same brick would be required. > Please look for if brick process went down or crashed. Doing a volume start
2017 Sep 12
2
one brick one volume process dies?
hi everyone I have a 3-peer cluster with all vols in replica mode, 9 vols. What I see, unfortunately, is one brick failing in one vol; when it happens it's always the same vol on the same brick. Command: gluster vol status $vol - would show the brick not online. Restarting glusterd with systemctl does not help, only a system reboot seems to help, until it happens again the next time. How to troubleshoot this
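A hedged sketch of what could be collected before falling back to a reboot; VOLNAME and the brick path are placeholders:

    # identify the brick that shows Online "N" and note its expected PID
    gluster volume status VOLNAME

    # is a glusterfsd process still running for that brick path?
    pgrep -af glusterfsd | grep BRICK-PATH

    # look at the tail of that brick's log around the time it dropped
    tail -n 200 /var/log/glusterfs/bricks/BRICK-PATH.log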
2012 Apr 25
1
dbench & similar - as a valid benchmark
hi everybody would a tool such as dbench be a valid benchmark for gluster? and, most importantly, is there any formula to estimate the raw fs to gluster performance ratio for different setups? for instance: having a replicated volume, two bricks, fuse mountpoint to the volume via non-congested 1Gbps, or even a volume on a single brick with a fuse client mountpoint locally - what percentage/fraction of raw
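One way such a comparison might be run, assuming dbench is installed and the paths below (placeholders) point at a brick's local filesystem and the volume's FUSE mountpoint respectively:

    # baseline: the raw local filesystem backing a brick
    dbench -D /data/brick1/benchtmp -t 60 8

    # same run against the FUSE mountpoint of the replicated volume
    dbench -D /mnt/VOLNAME/benchtmp -t 60 8

The ratio of the two reported throughput figures gives a rough raw-fs-to-gluster factor for that particular workload and client count.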
2018 May 01
0
unable to remove ACLs
On Tue, May 1, 2018 at 5:46 AM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi guys > > I have a simple case of: > $ setfacl -b > not working! > I copy a folder outside of autofs mounted gluster vol, to a regular fs and > removing acl works as expected. > Inside mounted gluster vol I seem to be able to modify/remove ACLs for > users, groups and masks but that one
2017 Jul 24
0
vol status detail - times out?
Yes it could, as depending on the number of bricks there might be too many brick ops involved. This is the reason we introduced the --timeout option in the CLI, which can be used to set a larger timeout value. However, this fix is available from release-3.9 onwards. On Mon, Jul 24, 2017 at 3:54 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi fellas > > would you know what could be the
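Based on that, on 3.9 or later the call could presumably be given a longer window, something like the following (VOLNAME and the value are placeholders; check the CLI help for the exact form on your release):

    # allow up to 10 minutes before the CLI gives up on the brick ops
    gluster --timeout=600 volume status VOLNAME detail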
2015 Jun 19
1
how do I conceptualize system & virtual users?
On 19/06/15 15:13, Mauricio Tavares wrote: > On Jun 19, 2015 9:08 AM, "lejeczek" <peljasz at yahoo.co.uk> wrote: >> I guess this would be a common case, I am hoping for some final > clarification. >> a few Linux boxes share ldap (multi-master) backend that PAM/SSSD uses to > authenticated users, and these LDAPs are also is used by Samba, users start > @ uid