Displaying 20 results from an estimated 1100 matches similar to: "one brick one volume process dies?"
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from above info, please provide glusterd logs,
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start
force should resolve the issue.
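A sketch of that suggestion (VOLNAME is a placeholder); on an already
started volume, "start force" only starts bricks that are down:

$ gluster volume start VOLNAME force   # restart just the missing brick process
$ gluster volume status VOLNAME        # confirm the brick is Online "Y" again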
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well i.e glusterd.logs and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally the brick log file of the same brick would be required.
> Please look for if brick process went down or crashed. Doing a volume start
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
Apart from above info, please provide glusterd logs, cmd_history.log.
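For reference, the commands and logs being requested (log paths are the
usual locations for RPM installs):

$ gluster volume info
$ gluster volume status
$ gluster peer status
# glusterd log and command history:
$ less /var/log/glusterfs/glusterd.log
$ less /var/log/glusterfs/cmd_history.log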
Thanks
Gaurav
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9 vols.
> What I see, unfortunately, is one brick
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) that peer (while it was unavailable) got
detached with the "gluster peer detach" command, which succeeded,
so now the cluster comprises three peers
3) Self-heal daemon (for some reason) does not start (even after an
attempt to restart glusterd) on the peer which probed that
fourth peer.
4) fourth
2017 Aug 01
4
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
How critical is the above?
I get plenty of these on all three peers.
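One way to see whether anything is listening on the refused port
(IP and port taken from the error above):

$ ss -ltnp | grep ':49155'    # on 10.5.6.32: is a brick bound to 49155?
$ gluster volume status       # compare with the port glusterd advertises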
hi guys
I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
behavior.
I see: $ gluster vol status $_vol detail; takes a long time and
mostly times out.
I do:
$ gluster vol heal $_vol info
and I see:
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Number
2017 Aug 02
0
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
This means shd client is not able to establish the connection with the
brick on port 49155. Now this could happen if glusterd has ended up
providing a stale port back which is not what brick is listening to. If you
had killed any brick process using sigkill signal instead of sigterm this
is expected as portmap_signout is not received by glusterd in the former
case and the old portmap entry is
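A sketch for spotting the stale-port mismatch described here (VOLNAME
is a placeholder):

$ gluster volume status VOLNAME   # port glusterd hands out for each brick
$ ss -ltnp | grep glusterfsd      # ports the brick processes really listen on
# a start force re-registers a down brick with glusterd's portmapper:
$ gluster volume start VOLNAME force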
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi all
>
> this:
> $ gluster vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vol statistics
> Gathering crawl statistics on volume GROUP-WORK
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick,
If one of the self-heal processes is down, will the statistics heal-count
command work?
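For reference, the command under discussion (volume name is a
placeholder); it polls each brick's self-heal daemon, hence the question:

$ gluster volume heal VOLNAME statistics heal-count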
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) that unavailable(while it was unavailable) peer got detached with
> "gluster peer detach" command
2017 Sep 04
2
heal info OK but statistics not working
hi all
this:
$ gluster vol heal $_vol info
outputs ok and exit code is 0
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
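One way to run that check (a sketch; GROUP-WORK comes from the output
above):

$ gluster volume status GROUP-WORK   # bricks with PID and Online Y/N
$ pgrep -af glusterfsd               # brick processes on the local peer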
I suspect gluster's inability to cope with a situation where
one peer (which is not even a brick for a single vol on
2017 Aug 29
0
modifying data via fuse causes heal problem
hi there
I'm running 3.10.5 and have 3 peers with vols in replication.
Each time I copy some data on a client (which is a peer too)
I see something like it:
# for QEMU-VMs:
Gathering count of entries to be healed on volume QEMU-VMs
has been successful
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 0
Brick
2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue.
The info file on node 10.5.6.17 contains an additional property
"tier-enabled" which is not present in the info file from the other 3 nodes;
hence, when a gluster peer probe call is made, in order to maintain
consistency across the cluster the cksum is compared. In this
case the two files differ, leading to different cksums, causing state
in
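A way to verify such a mismatch across peers (a sketch; <vol-name> is
a placeholder):

# run on every node and compare the outputs
$ grep tier-enabled /var/lib/glusterd/vols/<vol-name>/info
$ cat /var/lib/glusterd/vols/<vol-name>/cksum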
2017 Aug 29
3
peer rejected but connected
hi fellas,
same old same
in log of the probing peer I see:
...
[2017-08-29 13:36:16.882196] I [MSGID: 106493]
[glusterd-handler.c:3020:__glusterd_handle_probe_query]
0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0,
op_errno: 0, ret: 0
[2017-08-29 13:36:16.904961] I [MSGID: 106490]
[glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req]
0-glusterd: Received probe from uuid:
2017 Aug 30
0
peer rejected but connected
Could you please send me the "info" file, which is placed in the
"/var/lib/glusterd/vols/<vol-name>" directory, from all the nodes, along with
glusterd.logs and command-history.
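A sketch of gathering those on each node (<vol-name> is a placeholder;
log paths are the usual RPM locations):

$ tar czf /tmp/$(hostname)-gluster.tar.gz \
    /var/lib/glusterd/vols/<vol-name>/info \
    /var/log/glusterfs/glusterd.log \
    /var/log/glusterfs/cmd_history.log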
Thanks
Gaurav
On Tue, Aug 29, 2017 at 7:13 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi fellas,
> same old same
> in log of the probing peer I see:
> ...
> 2017-08-29
2015 Jun 17
2
why would the replicated-to server ask for extra fs permissions?
I think I'm close to getting a simple replication working, but on the
server which is still "empty" I get:
Initialization failed: Namespace '':
mkdir(/var/spool/mail/ccnr.biotechnology/nr412/Maildir)
failed: Permission denied (euid=1187(nr412) egid=513(Domain
Users) missing +w perm: /var/spool/mail, we're not in group
12(mail), dir owned by 0:12 mode=0775
but repl from server runs
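Going by the error text alone, two hedged options (user and group names
taken from the message above):

# either let Dovecot join the mail group for mail access
# (dovecot.conf: mail_access_groups = mail), or add the user to it:
$ usermod -aG mail nr412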
2016 Nov 09
4
CTDB and samba private dir (+ldap)
hi everyone
in an attempt to set up a cluster, I'm reading around and see
some howto writers say to put the "private dir on the FS
cluster" - one question I have: is this correct? Necessary?
I have partial success, I get:
$ ctdb status
Number of nodes:2
pnn:0 10.5.6.32 OK
pnn:1 10.5.6.49 UNHEALTHY (THIS NODE)
Generation:323266562
Size:2
hash:0 lmaster:0
hash:1
2016 Aug 20
4
What is broken with fail2ban
Hello List,
with CentOS 7.2, is it no longer possible to run fail2ban on a server?
I installed a fresh CentOS 7.2 and added the EPEL repository:
yum install fail2ban
I don't change anything; I only create a jail.local to enable the filters:
[sshd]
enabled = true
....
.....
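For context: on CentOS 7 with firewalld, fail2ban's default iptables
banaction can clash with firewalld unless the EPEL fail2ban-firewalld
subpackage is installed. A check sketch (jail name from above):

$ rpm -q fail2ban-firewalld         # ships the firewalld banaction override
$ fail2ban-client get sshd actions  # which action the sshd jail uses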
When I afterwards start fail2ban,
systemctl status fail2ban is clean,
but systemctl status firewalld is broken:
● firewalld.service -
2020 May 05
1
Samba update to 4.10 (with c7.8) - broken -?
hi guys
I've just let the system get the big update to 7.8, and with
it came a new Samba (4.10.4-10.el7.x86_64), a version which now
fails on my boxes for no apparent reason.
My Samba uses an LDAP backend but I can see no errors related
to that either.
...
[2020/05/05 15:53:37.093041,  0]
../../lib/util/become_daemon.c:136(daemon_ready)
  daemon_ready: daemon 'smbd' finished starting up and