Displaying 20 results from an estimated 10000 matches similar to: "in a replicated setup, how safe is it to take a brick down"
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from above info, please provide glusterd logs,
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start force
should resolve the issue.
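A minimal sketch of those two steps, assuming default log locations and placeholder volume/brick names (both are assumptions, not taken from this thread):

  # inspect the brick log for a crash or a clean shutdown (path layout assumed)
  less /var/log/glusterfs/bricks/<brick-path>.log
  # confirm which brick is reported as not online
  gluster volume status <volname>
  # respawn any brick process that is down, without touching the data
  gluster volume start <volname> force

A start force on an already started volume only brings missing brick (and self-heal) processes back up; it does not reinitialise the bricks.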
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2017 Sep 12
2
one brick one volume process dies?
hi everyone
I have a 3-peer cluster with all vols in replica mode, 9 vols.
What I see, unfortunately, is that one brick fails in one vol;
when it happens it's always the same brick in the same vol.
Command: gluster vol status $vol - would show the brick as not online.
Restarting glusterd with systemctl does not help; only a
system reboot seems to help, until it happens again the next time.
How to troubleshoot this
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally the brick log file of the same brick would be required.
> Please look for if brick process went down or crashed. Doing a volume start
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
Apart from above info, please provide glusterd logs, cmd_history.log.
Thanks
Gaurav
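For completeness, a minimal collection sketch for the outputs and logs being asked for, assuming default log locations (file names vary a little between gluster versions):

  gluster volume info   > vol-info.txt
  gluster volume status > vol-status.txt
  gluster peer status   > peer-status.txt
  # glusterd log and command history, per node, usually under:
  #   /var/log/glusterfs/glusterd.log
  #   /var/log/glusterfs/cmd_history.log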
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9 vols.
> What I see, unfortunately, is one brick
2018 Jan 03
1
@redhat - someone could take a look or ask about - freeipa-users@redhat.com
sorry guys to spam a bit - I hope someone from redhat could
check whether - freeipa-users at redhat.com - is up & ok?
I've been a subscriber for a couple of years, but now,
suddenly(?), I cannot mail there; I get:
"
Sorry, we were unable to deliver your message to the
following address.
<freeipa-users at redhat.com>:
554: 5.7.1 <freeipa-users at redhat.com>: Recipient
2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me with the RCA of the issue.
The info file on node 10.5.6.17 contains an additional property,
"tier-enabled", which is not present in the info file on the other 3 nodes. Hence,
when the gluster peer probe call is made, the cksum is compared in order to
maintain consistency across the cluster. In this
case the two files differ, and so do their cksums, causing the state
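A quick way to see such a mismatch for yourself, assuming the default /var/lib/glusterd layout and placeholder volume/node names:

  # on each node (or pull the files over with scp first):
  cat /var/lib/glusterd/vols/<volname>/info
  cat /var/lib/glusterd/vols/<volname>/cksum
  # diffing the info files from two nodes makes an extra key such as
  # tier-enabled show up as a one-line difference
  diff info.node1 info.node2

How the extra key is reconciled depends on the gluster version, so treat the above purely as a diagnostic aid.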
2023 Apr 19
2
bash test ?
On 19/04/2023 08:04, wwp wrote:
> Hello lejeczek,
>
>
> On Wed, 19 Apr 2023 07:50:29 +0200 lejeczek via CentOS <centos at centos.org> wrote:
>
>> Hi guys.
>>
>> I cannot wrap my head around this:
>>
>> -> $ unset _Val; test -z ${_Val}; echo $?
>> 0
>> -> $ unset _Val; test -n ${_Val}; echo $?
>> 0
>> -> $
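The behaviour follows from word splitting rather than from test itself; a minimal illustration (bash assumed, _Val is the variable from the post):

  unset _Val
  test -z "${_Val}"; echo $?   # 0: the quoted empty string has zero length
  test -n "${_Val}"; echo $?   # 1: the quoted empty string is not non-empty
  test -n ${_Val}; echo $?     # 0: unquoted, the expansion disappears and test
                               #    is left with the single argument "-n", a
                               #    non-empty string, hence true

Quoting the expansion keeps both tests meaningful even when the variable is unset or empty.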
2018 May 02
1
unable to remove ACLs
On 01/05/18 23:59, Vijay Bellur wrote:
>
>
> On Tue, May 1, 2018 at 5:46 AM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
>
> hi guys
>
> I have a simple case of:
> $ setfacl -b
> not working!
> I copy a folder outside of autofs mounted gluster vol,
> to a regular fs and removing acl works as
2015 Jun 23
3
is it safe to have two backends used for the same user?
hi everybody
I wonder if it is safe (and wise) to have two passwd/user
databases for the same user.
I'm thinking:
mail to me via pam
mail to me at this.domain via ldap
The whole Maildir would be essentially the same single storage
target; I see permissions would have to be mangled, to be
writable by both vmail and the actual uid.
What do you think? Is this how it's done?
regards
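If it helps to picture the setup, a hypothetical dovecot configuration sketch (driver names, paths and the static userdb are assumptions, not taken from the post): two passdb blocks so both PAM and LDAP logins are accepted, one userdb so the Maildir ownership stays a single uid/gid:

  passdb {
    driver = pam
  }
  passdb {
    driver = ldap
    args = /etc/dovecot/dovecot-ldap.conf.ext
  }
  userdb {
    driver = static
    args = uid=vmail gid=vmail home=/var/vmail/%u
  }

Keeping one userdb sidesteps most of the permission mangling, though whether that fits the intended layout is exactly the question the post raises.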
2015 Jun 17
2
why would the replicated-to server ask for extra fs permissions?
I think I'm close to getting simple replication working, but on the server
which is still "empty" I get:
Initialization failed: Namespace '':
mkdir(/var/spool/mail/ccnr.biotechnology/nr412/Maildir)
failed: Permission denied (euid=1187(nr412) egid=513(Domain
Users) missing +w perm: /var/spool/mail, we're not in group
12(mail), dir owned by 0:12 mode=0775
but repl from server runs
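Reading the error, the replica's process (euid 1187, user nr412) simply cannot create the directory under /var/spool/mail because it is neither the owner nor in group 12 (mail). A minimal check and one possible fix, with the user name taken from the error message and everything else an assumption:

  ls -ld /var/spool/mail   # drwxrwxr-x root:mail, per the error
  id nr412                 # is the user in group 12 (mail) on this box?
  # one option: add the user to the mail group so the group-write bit applies
  usermod -aG mail nr412
  # another: point mail_location at a directory the user already owns

Which route is right depends on how the rest of the mail store is laid out.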
2018 Oct 09
5
mount points @install time
hi everyone,
is there a way to add custom mount points at installation time?
And if there is, would you say /usr should/could go onto a separate
partition?
many thanks, L.
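In case a concrete example helps, a hypothetical kickstart fragment (sizes and fstype are placeholders, not a recommendation) showing custom mount points defined at install time; the interactive installer's manual-partitioning screen can achieve the same thing:

  part /boot --fstype=xfs --size=1024
  part /     --fstype=xfs --size=20480
  part /usr  --fstype=xfs --size=10240
  part /home --fstype=xfs --size=40960
  part swap  --size=4096

A separate /usr can work on systemd-based releases because the initramfs mounts it early, though opinions differ on whether it is worth the trouble.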
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) that peer (while it was unavailable) got
detached with the "gluster peer detach" command, which succeeded,
so now the cluster comprises three peers
3) the self-heal daemon (for some reason) does not start (even after an
attempt to restart glusterd) on the peer which probed that
fourth peer.
4) fourth
2015 Jun 19
3
how do I conceptualize system & virtual users?
I guess this would be a common case, I am hoping for some
final clarification.
a few Linux boxes share an LDAP (multi-master) backend that
PAM/SSSD uses to authenticate users, and these LDAPs are
also used by Samba; users start @ uid 1000.
The boxes are in both the same DNS and Samba domains.
Do I treat these users as system or virtual users from the
postfix/dovecot perspective?
If it can be a
2017 Aug 29
3
peer rejected but connected
hi fellas,
same old, same old
in the log of the probing peer I see:
...
[2017-08-29 13:36:16.882196] I [MSGID: 106493]
[glusterd-handler.c:3020:__glusterd_handle_probe_query]
0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0,
op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490]
[glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
2015 Jun 19
1
how do I conceptualize system & virtual users?
On 19/06/15 15:13, Mauricio Tavares wrote:
> On Jun 19, 2015 9:08 AM, "lejeczek" <peljasz at yahoo.co.uk> wrote:
>> I guess this would be a common case, I am hoping for some final
> clarification.
>> a few Linux boxes share an LDAP (multi-master) backend that PAM/SSSD uses to
> authenticate users, and these LDAPs are also used by Samba, users start
> @ uid
2018 May 01
2
unable to remove ACLs
hi guys
I have a simple case of:
$ setfacl -b
not working!
I copy a folder out of the autofs-mounted gluster vol to a
regular fs, and there removing the ACL works as expected.
Inside the mounted gluster vol I seem to be able to
modify/remove ACLs for users, groups and masks, but that one
simple, important thing does not work.
It is also not the case of default ACLs being enforced from
the parent, for I
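A couple of checks that usually narrow this down, offered as assumptions rather than a diagnosis (the mount point and path are placeholders):

  mount | grep gluster        # was the client mounted with -o acl?
  getfacl /mnt/gv0/some/dir   # note the entries before the attempt
  setfacl -b /mnt/gv0/some/dir
  getfacl /mnt/gv0/some/dir   # do the named user/group entries survive?

If the FUSE mount lacks the acl option (or an autofs map does not pass it through), ACL operations can behave inconsistently, so that is worth ruling out first.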
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file which is placed in the
"/var/lib/glusterd/vols/<vol-name>" directory from all the nodes, along with
glusterd.log and the command history.
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi fellas,
> same old same
> in log of the probing peer I see:
> ...
> 2017-08-29