Displaying 20 results from an estimated 9000 matches similar to: "filesystem failure on one peer makes volume read only?"
2017 Aug 01
4
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
how critical is the above?
I get plenty of these on all three peers.
hi guys
I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
behavior.
I see: $ gluster vol status $_vol detail; takes a long time and
mostly times out.
I do:
$ gluster vol heal $_vol info
and I see:
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Number
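A minimal sketch of first checks for the symptoms above, assuming $_vol is the same shell variable the poster uses and the brick path is the one quoted; everything else is illustrative:
# are all peers connected, and which bricks does glusterd report online?
$ gluster peer status
$ gluster volume status $_vol
# on 10.5.6.32: is a brick process actually running for that brick path?
$ pgrep -af glusterfsd | grep 0GLUSTER-CYTO-DATA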
2017 Aug 01
2
"other names" - how to clean/get rid of ?
hi
how to get rid of entries in "Other names" ?
thanks
L.
2017 Aug 02
0
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
This means the shd client is not able to establish a connection with the
brick on port 49155. This could happen if glusterd has ended up
providing back a stale port which is not the port the brick is actually
listening on. If you had killed any brick process with SIGKILL instead
of SIGTERM this is expected, as portmap_signout is not received by
glusterd in the former case and the old portmap entry is
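A hedged way to verify the stale-port condition described above, reusing port 49155 and the $_vol variable from the thread; everything else is illustrative:
# port glusterd advertises for the brick on 10.5.6.32
$ gluster volume status $_vol | grep 10.5.6.32
# on 10.5.6.32: port the brick process is actually listening on
$ ss -tlnp | grep glusterfsd
# if the two differ, the portmap entry is stale; stopping bricks with SIGTERM
# (not SIGKILL) lets glusterd receive the portmap_signout and avoid this.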
2017 Aug 02
0
"other names" - how to clean/get rid of ?
Are you referring to the "Other names" field of the peer status output? If
so, a peerinfo entry having other names populated means the peer might have
multiple n/w interfaces, or reverse address resolution is picking up this
name. But why are you worried about this part?
On Tue, 1 Aug 2017 at 23:24, peljasz <peljasz at yahoo.co.uk> wrote:
> hi
>
> how to get rid of entries in "Other
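Two hedged checks for where an "Other names" entry can come from; the address is taken from another thread in this listing, purely as an example:
# does reverse resolution return the extra name?
$ getent hosts 10.5.6.32
# peer records, including any additional hostnameN lines, live here
$ cat /var/lib/glusterd/peers/*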
2017 Sep 13
2
one brick one volume process dies?
Additionally the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start
force should resolve the issue.
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
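A short sketch of the advice above; the brick log name is derived from the brick path quoted earlier by glusterfs' usual slash-to-dash convention, so double-check the exact file under /var/log/glusterfs/bricks/:
# look for a crash or shutdown message in the brick's own log
$ less /var/log/glusterfs/bricks/__.aLocalStorages-0-0-GLUSTERs-0GLUSTER-CYTO-DATA.log
# respawn any brick process that is down for this volume
$ gluster volume start $_vol force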
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally the brick log file of the same brick would be required.
> Please check whether the brick process went down or crashed. Doing a volume start
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.log and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from above info, please provide glusterd logs,
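For reference, the files being asked for usually live in these locations; exact names can differ between versions, so treat this as a sketch:
# glusterd's own log and the CLI command history
$ ls -l /var/log/glusterfs/glusterd.log /var/log/glusterfs/cmd_history.log
# on older builds the glusterd log may be named etc-glusterfs-glusterd.vol.log instead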
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Jun 11
5
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote:
> I thought you had removed vna as defective and then ADDED in vnh as
> the replacement?
>
> Why is vna still there?
Because I *can't* remove it. It died and was unable to be brought up. The
gluster peer detach command only works with live servers - a severe
problem IMHO.
--
Lindsay Mathieson
2017 Jun 30
3
Gluster failure due to "0-management: Lock not released for <volumename>"
On Thu, 29 Jun 2017 at 22:51, Victor Nomura <victor at mezine.com> wrote:
> Thanks for the reply. What would be the best course of action? The data
> on the volume isn't important right now but I'm worried when our setup goes
> to production we don't have the same situation and really need to recover
> our Gluster setup.
>
>
>
> I'm assuming that to redo is to
2017 Jun 29
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Thanks for the reply. What would be the best course of action? The data on the volume isn't important right now but I'm worried when our setup goes to production we don't have the same situation and really need to recover our Gluster setup.
I'm assuming that to redo is to delete everything in the /var/lib/glusterd directory on each of the nodes and recreate the volume again. Essentially
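A heavily hedged sketch of the "redo" being described, i.e. wiping glusterd's state and recreating the volume from scratch. This destroys the cluster configuration and only makes sense when, as above, the data does not matter; service name, node names, volume name and brick paths below are placeholders:
# on every node (the service may be glusterd or glusterfs-server, depending on distro)
$ systemctl stop glusterfs-server
$ rm -rf /var/lib/glusterd/*        # removes peers, vols and the node UUID
$ systemctl start glusterfs-server
# then, from one node: re-probe the others and recreate the volume
$ gluster peer probe <other-node>
$ gluster volume create <volname> replica 3 <n1>:/brick <n2>:/brick <n3>:/brick
$ gluster volume start <volname>
# note: brick directories must be empty/new (or have old gluster xattrs cleared)
# for the create to succeed.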
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) that peer, while it was unavailable, got detached with
the "gluster peer detach" command, which succeeded, so now
the cluster comprises three peers
3) Self-heal daemon (for some reason) does not start (with
an attempt to restart glusterd) on the peer which probed
that fourth peer.
4) fourth
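A hedged way to check the state described in step 3, reusing the $_vol convention from earlier posts:
# the Self-heal Daemon rows show whether shd is online on each peer
$ gluster volume status $_vol
# the per-brick heal counts this thread says stop working
$ gluster volume heal $_vol statistics heal-count
# on the affected peer, shd start failures should show up here
$ tail -n 50 /var/log/glusterfs/glustershd.log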
2017 Jun 11
0
How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <lindsay.mathieson at gmail.com>
wrote:
> On 11/06/2017 10:46 AM, WK wrote:
> > I thought you had removed vna as defective and then ADDED in vnh as
> > the replacement?
> >
> > Why is vna still there?
>
> Because I *can't* remove it. It died, was unable to be brought up. The
> gluster peer detach command
2017 Jun 11
2
How to remove dead peer, sorry urgent again :(
On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
> If the dead server doesn't host any volumes (bricks of volumes to be
> specific) then you can actually remove the uuid entry from
> /var/lib/glusterd on the other nodes
Is that just the file entry in "/var/lib/glusterd/peers" ?
e.g I have:
gluster peer status
Number of Peers: 3
Hostname: vnh.proxmox.softlog
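A hedged sketch of the cleanup described above, assuming the dead peer (vna in this thread) hosts no bricks; the UUID file name is a placeholder:
# on each surviving node: find the peer file whose hostname matches the dead peer
$ grep -l 'hostname1=vna' /var/lib/glusterd/peers/*
# remove that one file (named after the dead peer's UUID), then restart glusterd
$ rm /var/lib/glusterd/peers/<dead-peer-uuid>
$ systemctl restart glusterd        # or glusterfs-server, depending on distro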
2017 Jun 15
2
gluster peer probe failing
Thanks, but my current settings are:
net.ipv4.ip_local_reserved_ports = 30000-32767
net.ipv4.ip_local_port_range = 32768 60999
meaning the reserved ports are already in the short int range, so maybe I misunderstood something, or is it a different issue?
From: Atin Mukherjee [mailto:amukherj at redhat.com]
Sent: Thursday, June 15, 2017 10:56 AM
To: Guy Cukierman <guyc at elminda.com>
Cc:
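For context, the two sysctls quoted above can be inspected and changed like this; the range shown is simply the one from the post, not a recommendation:
# current values
$ sysctl net.ipv4.ip_local_reserved_ports net.ipv4.ip_local_port_range
# reserve a range so the kernel won't hand those ports out as ephemeral ports
$ sysctl -w net.ipv4.ip_local_reserved_ports=30000-32767
# persist it across reboots (file name is illustrative)
$ echo 'net.ipv4.ip_local_reserved_ports = 30000-32767' >> /etc/sysctl.d/99-gluster.conf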
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick,
If one of the self-heal processes is down, will the statistics heal-count
command work?
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) that peer, while it was unavailable, got detached with the
> "gluster peer detach" command
2017 Jun 15
0
gluster peer probe failing
+Gaurav, he is the author of the patch, can you please comment here?
On Thu, Jun 15, 2017 at 3:28 PM, Guy Cukierman <guyc at elminda.com> wrote:
> Thanks, but my current settings are:
>
> net.ipv4.ip_local_reserved_ports = 30000-32767
>
> net.ipv4.ip_local_port_range = 32768 60999
>
> meaning the reserved ports are already in the short int range, so maybe I
>
2017 Jul 05
1
Gluster failure due to "0-management: Lock not released for <volumename>"
Do you by any chance have any redundant peer entries in the
/var/lib/glusterd/peers directory? Can you please share the content of this
directory from all the nodes?
On Tue, Jul 4, 2017 at 11:55 PM, Victor Nomura <victor at mezine.com> wrote:
> Specifically, I must stop glusterfs-server service on the other nodes in
> order to perform any gluster commands on any node.
>
>
>
>
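One hedged way to gather what is being asked for here; node names are placeholders:
# dump every peer record, prefixed with its file name, from each node
$ for h in node1 node2 node3; do echo "== $h =="; ssh "$h" 'grep -H . /var/lib/glusterd/peers/*'; done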
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Specifically, I must stop the glusterfs-server service on the other nodes in order to perform any gluster commands on any node.
From: Victor Nomura [mailto:victor at mezine.com]
Sent: July-04-17 9:41 AM
To: 'Atin Mukherjee'
Cc: 'gluster-users'
Subject: RE: [Gluster-users] Gluster failure due to "0-management: Lock not released for <volumename>"
The nodes have