similar to: Gluster failure due to "0-management: Lock not released for <volumename>"

Displaying 20 results from an estimated 600 matches similar to: "Gluster failure due to "0-management: Lock not released for <volumename>""

2017 Jun 27
2
Gluster failure due to "0-management: Lock not released for <volumename>"
I had looked at the logs shared by Victor privately and it seems there is a network (N/W) glitch in the cluster which is causing glusterd to lose its connection with the other peers; as a side effect, a lot of RPC requests are getting bailed out, leaving glusterd stuck on a stale lock, and hence you see that some of the commands failed with "another transaction is in progress or
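A stale glusterd management lock of this kind is usually cleared by restarting the glusterd daemon on each node once the network is stable again. A minimal sketch of that recovery, assuming systemd and the default glusterd service name (the commands below are illustrative, not taken from this thread):

    # run on each node in turn, once the network between peers is stable
    systemctl restart glusterd
    # confirm the peers have reconnected before moving to the next node
    gluster peer status
    # the lock is gone when management commands stop failing with
    # "Another transaction is in progress"
    gluster volume status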
2017 Jun 22
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Could you attach glusterd.log and cmd_history.log files from all the nodes? On Wed, Jun 21, 2017 at 11:40 PM, Victor Nomura <victor at mezine.com> wrote: > Hi All, > > > > I'm fairly new to Gluster (3.10.3) and got it going for a couple of months > now but suddenly after a power failure in our building it all came crashing > down. No client is able to connect after
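Both files being requested live under /var/log/glusterfs/ by default; a small sketch of collecting them from a node for attachment (assuming the default log directory has not been changed):

    # run on every node; the hostname keeps the archives apart
    tar czf "gluster-logs-$(hostname).tar.gz" \
        /var/log/glusterfs/glusterd.log \
        /var/log/glusterfs/cmd_history.log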
2017 Jun 29
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Thanks for the reply. What would be the best course of action? The data on the volume isn't important right now, but I'm worried that when our setup goes to production we'll end up in the same situation, and we really need to be able to recover our Gluster setup. I'm assuming that to redo is to delete everything in the /var/lib/glusterd directory on each of the nodes and recreate the volume again. Essentially
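The "redo" being described is essentially wiping glusterd's state and rebuilding the trusted pool and volume from scratch. A rough, destructive sketch of that sequence, assuming the brick data is expendable and using placeholder hostnames (node1..node3), volume name (myvol) and brick paths:

    # DESTRUCTIVE: removes all volume and peer state on this node
    systemctl stop glusterd
    mv /var/lib/glusterd /var/lib/glusterd.bak    # keep a backup rather than deleting outright
    systemctl start glusterd
    # from one node only, rebuild the pool and the volume
    gluster peer probe node2
    gluster peer probe node3
    # brick directories that belonged to the old volume may still carry gluster
    # xattrs; clear them or append 'force' to the create command
    gluster volume create myvol replica 3 node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
    gluster volume start myvol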
2017 Jun 30
3
Gluster failure due to "0-management: Lock not released for <volumename>"
On Thu, 29 Jun 2017 at 22:51, Victor Nomura <victor at mezine.com> wrote: > Thanks for the reply. What would be the best course of action? The data > on the volume isn't important right now but I'm worried when our setup goes > to production we don't have the same situation and really need to recover > our Gluster setup. > > > > I'm assuming that to redo is to
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Specifically, I must stop glusterfs-server service on the other nodes in order to perform any gluster commands on any node. From: Victor Nomura [mailto:victor at mezine.com] Sent: July-04-17 9:41 AM To: 'Atin Mukherjee' Cc: 'gluster-users' Subject: RE: [Gluster-users] Gluster failure due to "0-management: Lock not released for <volumename>" The nodes have
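The workaround being described, stopping the management daemon on the peers so that a single node can run commands, might look roughly like this on Debian/Ubuntu, where the service is named glusterfs-server (a temporary diagnostic step, not a fix):

    # on the other two nodes
    systemctl stop glusterfs-server
    # on the remaining node, management commands should now respond
    gluster volume status
    gluster volume info
    # bring the peers back afterwards
    systemctl start glusterfs-server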
2017 Jul 05
1
Gluster failure due to "0-management: Lock not released for <volumename>"
By any chance do you have any redundant peer entries in the /var/lib/glusterd/peers directory? Can you please share the content of this folder from all the nodes? On Tue, Jul 4, 2017 at 11:55 PM, Victor Nomura <victor at mezine.com> wrote: > Specifically, I must stop glusterfs-server service on the other nodes in > order to perform any gluster commands on any node. > > > >
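Each file in /var/lib/glusterd/peers/ describes one peer (UUID, state, hostname). A quick way to dump them alongside what glusterd itself reports, to spot duplicate or stale entries (run on every node and compare the outputs):

    for f in /var/lib/glusterd/peers/*; do
        echo "== $f"
        cat "$f"
    done
    gluster peer status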
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
The nodes have all been rebooted numerous times with no difference in outcome. The nodes are all connected to the same switch, and I also replaced it to see if that made any difference. There are no issues with network connectivity and no firewall in place between the nodes. I can't do a gluster volume status without it timing out the moment the other 2 nodes are connected to the switch.
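Since the symptom only appears when the other nodes are on the switch, one simple sanity check (not from this thread, just a generic sketch with node2 as a placeholder hostname) is whether the glusterd management port, 24007 by default, is reachable in both directions:

    ping -c 3 node2
    nc -zv node2 24007    # glusterd management port
    # brick ports (49152 and up on 3.10) matter for data traffic, but 24007
    # is what peer handshakes and 'gluster volume status' depend on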
2017 Dec 20
2
Upgrading from Gluster 3.8 to 3.12
Looks like a bug, as I see tier-enabled = 0 is an additional entry in the info file on shchhv01. As per the code, this field should be written into the glusterd store if the op-version is >= 30706. What I am guessing is that since we didn't have commit 33f8703a1 ("glusterd: regenerate volfiles on op-version bump up") in 3.8.4, while bumping up the op-version the info and volfiles were
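One way to check for the same mismatch on another cluster (a hedged sketch, reusing the shchst01 volume name from this thread; adjust the path for your own volume) is to compare the op-version and the per-node info files directly:

    # the op-version this glusterd is running with
    grep operating-version /var/lib/glusterd/glusterd.info
    # the extra entry shows up in the per-node volume metadata
    grep -n "tier-enabled" /var/lib/glusterd/vols/shchst01/info
    # checksums make it obvious which nodes disagree
    md5sum /var/lib/glusterd/vols/shchst01/info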
2017 Dec 19
2
Upgrading from Gluster 3.8 to 3.12
I have not done the upgrade yet. Since this is a production cluster I need to make sure it stays up, or schedule some downtime if it doesn't. Thanks. On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > > > On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki <ziemowit at pierzycki.com> > wrote: >> >> Hi, >>
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
I was attempting the same on a local sandbox and also have the same problem. Current: 3.8.4 Volume Name: shchst01 Type: Distributed-Replicate Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3 Status: Started Snapshot Count: 0 Number of Bricks: 4 x 3 = 12 Transport-type: tcp Bricks: Brick1: shchhv01-sto:/data/brick3/shchst01 Brick2: shchhv02-sto:/data/brick3/shchst01 Brick3:
2017 Dec 20
0
Upgrading from Gluster 3.8 to 3.12
Yes Atin. I'll take a look. On Wed, Dec 20, 2017 at 11:28 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > Looks like a bug as I see tier-enabled = 0 is an additional entry in the > info file in shchhv01. As per the code, this field should be written into > the glusterd store if the op-version is >= 30706 . What I am guessing is > since we didn't have the commit
2009 May 28
2
Glusterfs 2.0 hangs on high load
Hello! After upgrading to version 2.0 (now using 2.0.1), I'm experiencing problems with glusterfs stability. I'm running a 2-node setup with client-side AFR, and glusterfsd is also running on the same servers. From time to time glusterfs just hangs; I can reproduce this by running the iozone benchmarking tool. I'm using a patched FUSE, but the result is the same with the unpatched one.
2011 Dec 14
1
glusterfs crash when the one of replicate node restart
Hi, we have used glusterfs for two years. After upgrading to 3.2.5, we discovered that when one of the replicate nodes reboots and starts up the glusterd daemon, gluster crashes because the other replicate node's CPU usage reaches 100%. Our gluster info: Type: Distributed-Replicate Status: Started Number of Bricks: 5 x 2 = 10 Transport-type: tcp Options Reconfigured: performance.cache-size: 3GB
2008 Aug 01
1
file descriptor in bad state
I've just set up a simple gluster storage system on Centos 5.2 x64 with gluster 1.3.10. I have three storage bricks and one client. Every time I run iozone across this setup, I seem to get a bad file descriptor around the 4k mark. Any thoughts why? I'm sure more info is wanted, I'm just not sure what else to include at this point. Thanks. [root at green gluster]# cat
2017 May 29
1
Failure while upgrading gluster to 3.10.1
Sorry for the big attachment in the previous mail... the last 1000 lines of those logs are attached now. On Mon, May 29, 2017 at 4:44 PM, Pawan Alwandi <pawan at platform.sh> wrote: > > > On Thu, May 25, 2017 at 9:54 PM, Atin Mukherjee <amukherj at redhat.com> > wrote: > >> >> On Thu, 25 May 2017 at 19:11, Pawan Alwandi <pawan at platform.sh> wrote: >>
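Trimming logs to their last 1000 lines before attaching, as done here, is just a tail on each file; a small sketch, assuming it is the default glusterd and cmd_history logs being shared:

    tail -n 1000 /var/log/glusterfs/glusterd.log > glusterd-last1000.log
    tail -n 1000 /var/log/glusterfs/cmd_history.log > cmd_history-last1000.log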
2017 Dec 15
3
Production Volume will not start
Hi all, I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: Error: Request timed out. For some time after that, the volume is locked and we either have to wait or restart the Gluster services. In the glusterd.log, it shows the following: [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
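When a start request times out like this, a generic first pass (hedged, and not necessarily this thread's eventual resolution; myvol is a placeholder volume name) is to check peer and brick health and whether a forced start changes anything:

    gluster peer status
    gluster volume status myvol
    # 'force' retries starting any bricks that failed to come up
    gluster volume start myvol force
    # the brick logs often say why glusterd_brick_start gave up
    tail -n 100 /var/log/glusterfs/bricks/*.log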
2011 Oct 18
2
gluster rebalance taking three months
Hi guys, we have had a rebalance running on eight bricks since July and this is what the status looks like right now: ===Tue Oct 18 13:45:01 CST 2011 ==== rebalance step 1: layout fix in progress: fixed layout 223623 There are roughly 8T of photos in the storage, so how long should this rebalance take? What does the number (in this case 223623) represent? Our gluster information: Repository
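The same progress counters can also be watched from the CLI rather than the log; a hedged example (myvol is a placeholder volume name, and the exact output format varies between releases):

    # per-node progress of the running rebalance
    gluster volume rebalance myvol status
    # a layout-only pass can also be started explicitly
    gluster volume rebalance myvol fix-layout start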
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin, I've gotten around to this and was able to get the upgrade done using 3.7.0 before moving to 3.11. For some reason 3.7.9 wasn't working well. On 3.11, though, I notice that gluster/nfs has really been made optional and nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha on new clusters but would like to have glusterfs-gnfs on existing clusters so a seamless upgrade
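For clusters that still want the built-in gluster/nfs server after the upgrade, the usual per-volume knob (hedged; whether the gnfs bits are even packaged varies by distribution and release) is nfs.disable:

    gluster volume set myvol nfs.disable off    # myvol is a placeholder volume name
    gluster volume status myvol                 # the NFS Server entries should show online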
2018 May 02
3
Healing : No space left on device
Hello list, I have an issue on my Gluster cluster. It is composed of two data nodes and an arbiter for all my volumes. After having upgraded my bricks to gluster 3.12.9 (Fedora 27), this is what I get: - on node 1, volumes won't start, and glusterd.log shows a lot of: [2018-05-02 09:46:06.267817] W [glusterd-locks.c:843:glusterd_mgmt_v3_unlock]
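"No space left on device" during healing often traces back to a full brick filesystem or exhausted inodes; a generic first pass (placeholder paths and volume name, not specific to this thread's resolution):

    # free space and free inodes on the brick filesystems; arbiter bricks
    # can run out of inodes long before they run out of bytes
    df -h /path/to/brick
    df -i /path/to/brick
    # what the self-heal daemon still has queued
    gluster volume heal myvol info
    gluster volume heal myvol statistics heal-count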
2017 Nov 06
2
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
Do the users have permission to see/interact with the directories, in addition to the files? On Mon, Nov 6, 2017 at 1:55 PM, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi, > > Please provide the gluster volume info. Do you see any errors in the > client mount log file (/var/log/glusterfs/var-lib-mountedgluster.log)? > > > Thanks, > Nithya > > On 6
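One way to answer that permission question is to compare what the client mount shows with what the bricks hold, and to check the client log mentioned above; a sketch assuming the mount point is /var/lib/mountedgluster (inferred from the log file name) and somedir is a placeholder directory:

    # on the client, as the affected user
    ls -la /var/lib/mountedgluster
    stat /var/lib/mountedgluster/somedir
    # on a server, the same directory relative to the brick root
    ls -la /path/to/brick/somedir
    # permission and ESTALE errors usually land in the client log
    tail -n 50 /var/log/glusterfs/var-lib-mountedgluster.log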