2017 Jun 21
2
Gluster failure due to "0-management: Lock not released for <volumename>"
Hi All, I'm fairly new to Gluster (3.10.3) and had it running for a couple of months, but after a power failure in our building it all came crashing down. No client is able to connect after powering the 3 nodes I have set up back on. Looking at the logs, it appears there's some sort of "Lock" placed on the volume which prevents all the clients from connecting to...
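For anyone triaging the same symptom, a minimal sketch of how to look for the lock message; this assumes the default log location, which varies by distro and Gluster version:

    # Look for the stale-lock message in glusterd's log on each node
    grep -i "lock not released" /var/log/glusterfs/glusterd.log

    # Check whether the peers still see each other while clients are failing
    gluster peer status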
2017 Jun 22
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Could you attach glusterd.log and cmd_history.log files from all the nodes? On Wed, Jun 21, 2017 at 11:40 PM, Victor Nomura <victor at mezine.com> wrote: > Hi All, > > > > I'm fairly new to Gluster (3.10.3) and got it going for a couple of months > now but suddenly after a power failure in our building it all came crashing > down. No client is able to connect after...
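If anyone needs to gather the same files, a sketch that bundles them per node (these are the default Gluster log paths; adjust to your install):

    # Run on every node, then attach the resulting archives
    tar czf "glusterd-logs-$(hostname).tar.gz" \
        /var/log/glusterfs/glusterd.log \
        /var/log/glusterfs/cmd_history.log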
2017 Jun 27
2
Gluster failure due to "0-management: Lock not released for <volumename>"
I had looked at the logs Victor shared privately, and it seems there is a network glitch in the cluster which is causing glusterd to lose its connection with the other peers. As a side effect, a lot of RPC requests are getting bailed out, leaving glusterd with a stale lock, and hence you see that some of the commands failed with "another transaction is in progress or...
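A hedged sketch of how to spot those disconnects and bail-outs, and to verify the management port between nodes; node names here are placeholders, 24007 is glusterd's default port, and the exact log wording can differ across versions:

    # Recent disconnect / bail-out evidence in glusterd's log
    grep -Ei "disconnected|bailing out" /var/log/glusterfs/glusterd.log | tail -20

    # Confirm each peer's glusterd port is reachable (node2/node3 are hypothetical)
    for peer in node2 node3; do
        nc -zvw3 "$peer" 24007
    done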
2017 Jun 29
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Thanks for the reply. What would be the best course of action? The data on the volume isn't important right now, but I'm worried that when our setup goes to production we'll hit the same situation and really need to recover our Gluster setup. I'm assuming that to redo it I would delete everything in the /var/lib/glusterd directory on each of the nodes and recreate the volume again. Essentially...
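If one did go the wipe-and-recreate route, a cautious sketch, not a procedure confirmed anywhere in this thread: the service name is the Debian/Ubuntu one ('glusterd' on RHEL/CentOS), and keeping glusterd.info is an assumption made here to preserve each node's UUID:

    # On each node, with the management daemon stopped; back everything up first
    systemctl stop glusterfs-server
    cp -a /var/lib/glusterd /var/lib/glusterd.bak

    # Remove the configuration but keep glusterd.info (the node's identity)
    find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
    systemctl start glusterfs-server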
2017 Jun 30
3
Gluster failure due to "0-management: Lock not released for <volumename>"
On Thu, 29 Jun 2017 at 22:51, Victor Nomura <victor at mezine.com> wrote: > Thanks for the reply. What would be the best course of action? The data > on the volume isn't important right now but I'm worried when our setup goes > to production we don't have the same situation and really need to recover > our Gluster setup. > > > > I'm assuming that to redo is to...
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Specifically, I must stop the glusterfs-server service on the other nodes in order to perform any gluster commands on any node. From: Victor Nomura [mailto:victor at mezine.com] Sent: July-04-17 9:41 AM To: 'Atin Mukherjee' Cc: 'gluster-users' Subject: RE: [Gluster-users] Gluster failure due to "0-management: Lock not released for <volumename>" The nodes have...
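The workaround being described, as a sketch (service name assumed Debian/Ubuntu-style; node names are hypothetical):

    # On node2 and node3: stop the management daemon
    systemctl stop glusterfs-server

    # Then, on node1, management commands stop timing out
    gluster volume status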
2017 Jul 05
1
Gluster failure due to "0-management: Lock not released for <volumename>"
By any chance do you have any redundant peer entries in the /var/lib/glusterd/peers directory? Can you please share the contents of this folder from all the nodes? On Tue, Jul 4, 2017 at 11:55 PM, Victor Nomura <victor at mezine.com> wrote: > Specifically, I must stop the glusterfs-server service on the other nodes in > order to perform any gluster commands on any node. ...
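To collect what's being asked for, a small sketch (default path; each file in peers/ is named after a peer UUID and records that peer's hostname and state):

    # Run on every node and compare the output across nodes
    for f in /var/lib/glusterd/peers/*; do
        echo "== $f =="
        cat "$f"
    done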
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
The nodes have all been rebooted numerous times with no difference in outcome. The nodes are all connected to the same switch, and I also replaced it to see if that made any difference. There are no network connectivity issues and no firewall in place between the nodes. I can't do a gluster volume status without it timing out the moment the other 2 nodes are connected to the switch.
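To double-check the "no connectivity issues" claim at the Gluster level, a sketch under stated assumptions (hostnames are placeholders; 24007 is glusterd's management port, and 3.x bricks listen on ports from 49152 up):

    # From each node, confirm the peers answer on glusterd's port
    for peer in node2 node3; do
        ping -c 2 "$peer"
        nc -zvw3 "$peer" 24007
    done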