similar to: glusterd-locks.c:572:glusterd_mgmt_v3_lock

Displaying 20 results from an estimated 500 matches similar to: "glusterd-locks.c:572:glusterd_mgmt_v3_lock"

2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes. Thanks, Paolo. On 20/07/2017 11:38, Atin Mukherjee wrote: > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi list, > > recently I've
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
OK, on my nagios instance I've disabled the gluster status check on all nodes except one; I'll check whether this is enough. Thanks, Paolo. On 20/07/2017 13:50, Atin Mukherjee wrote: > So from the cmd_history.logs across all the nodes it's evident that > multiple commands on the same volume are run simultaneously, which can > result in transaction collisions, and you can
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes. On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi list, > > recently I've noticed strange behaviour in my gluster storage: sometimes, > while executing a simple command like "gluster volume status > vm-images-repo", as a response I get "Another transaction
2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Technically, if only one node is issuing all these status commands, you shouldn't get into this situation. Can you please help me with the latest cmd_history and glusterd log files from all the nodes? On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Atin, > > I initially disabled the gluster status check on all nodes except one on
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
So from the cmd_history.logs across all the nodes it's evident that multiple commands on the same volume are run simultaneously, which can result in transaction collisions, and you can end up with one command succeeding and the others failing. Ideally, if you are running the volume status command for monitoring, it should be run from only one node. On Thu, Jul 20, 2017 at 3:54 PM, Paolo
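A minimal sketch of that suggestion, assuming the vm-images-repo volume from the thread and a single designated monitoring node (the lock file path is hypothetical):

    # run only from one designated node, e.g. as the single nagios check;
    # flock additionally prevents two checks on that node from overlapping
    flock -n /var/lock/gluster-status.lock gluster volume status vm-images-repo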
2017 Jul 26
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin, I initially disabled the gluster status check on all nodes except one on my nagios instance, as you recommended, but this issue happened again. So I disabled it on every node, but the error still happens; currently only oVirt is monitoring gluster. I cannot modify this behaviour in the oVirt GUI; is there anything that I could do from the gluster perspective to solve this
2017 Jul 27
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin, attached are all the requested logs. Considering that I'm using gluster as a storage system for oVirt, I've also checked those logs and I've seen that almost every command on all three nodes is executed by the supervdsm daemon and not only by the SPM node. Could this be the root cause of the problem? Greetings, Paolo PS: could you suggest a better method than
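A hedged way to check this from the gluster side (assuming the default log location) is to compare, node by node, which commands hit the same volume and whether their timestamps overlap:

    # run on each storage node; overlapping timestamps on the same volume
    # indicate the concurrent transactions that trigger the mgmt_v3 lock error
    grep 'vm-images-repo' /var/log/glusterfs/cmd_history.log | tail -n 50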
2017 Jun 28
2
afr-self-heald.c:479:afr_shd_index_sweep
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/28/2017 06:52 PM, Paolo Margara wrote: > >> Hi list, >> >> yesterday I noticed the following lines in the glustershd.log file: >> >> [2017-06-28 11:53:05.000890] W [MSGID: 108034] >> [afr-self-heald.c:479:afr_shd_index_sweep] >>
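When the self-heal daemon logs warnings like this, a hedged first check (volume name taken from the rest of the thread) is to confirm the daemon is online and see whether heals are pending:

    gluster volume status vm-images-repo      # the Self-heal Daemon should be online on every node
    gluster volume heal vm-images-repo info   # lists entries still waiting to be healed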
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > * put the node in maintenance mode (ensure no clients are active) > * yum versionlock delete glusterfs* > * service glusterd stop > * yum update > * systemctl daemon-reload > * service glusterd start > * yum versionlock add glusterfs* > *
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
Hi Pranith, I'm using this guide: https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md Definitely my fault, but I think it would be better to specify somewhere that restarting the service is not enough, simply because in many other cases, with other services, it is sufficient. Now I'm restarting every brick process (and waiting for
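A hedged sketch of one common way to restart the brick and self-heal processes on an upgraded storage node so they pick up the new binaries (node already in maintenance, no clients active; volume name taken from the thread):

    systemctl stop glusterd                    # stop the management daemon
    pkill glusterfsd; pkill glusterfs          # kill the old brick / self-heal processes
    systemctl start glusterd                   # glusterd respawns bricks with the upgraded binaries
    gluster volume heal vm-images-repo info    # wait for heals to finish before the next node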
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Hi all, for the upgrade I followed this procedure: * put the node in maintenance mode (ensure no clients are active) * yum versionlock delete glusterfs* * service glusterd stop * yum update * systemctl daemon-reload * service glusterd start * yum versionlock add glusterfs* * gluster volume heal vm-images-repo full * gluster volume heal vm-images-repo info on each server every time
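Since later messages in the thread note that restarting the service alone was not enough, a hedged way to verify that the bricks were really restarted after the update is to check that their start times postdate the yum update:

    gluster volume status vm-images-repo      # lists the PID of every brick process
    ps -o pid,lstart,cmd -C glusterfsd        # start times should be later than the update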
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Paolo, Which document did you follow for the upgrade? We can fix the documentation if there are any issues. On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > - put the node in maintenance mode (ensure no clients are active)
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Pranith, > > I'm using this guide: https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md > > Definitely my fault, but I think it would be better to specify somewhere that > restarting the service is not enough simply
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote: > > > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi Pranith, > > I'm using this guide > https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md
2017 Jun 20
2
Unable to get transaction opinfo for transaction ID gluster version 3.6
Hi, I have some blocked transactions. Does anybody have any advice on how I could fix this? I am unsure where to start. I believe this broke after I issued some set auth.allow commands: # gluster volume set oem-shared auth.allow 10.54.54.57,10.54.54.160,10.54.54.161,10.54.54.213,10.54.54.214,10.22.9.73,10.22.9.74 Kind regards, Sophie [2017-06-20 13:28:24.052623] E
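One hedged way to clear a stale management-plane transaction lock, not confirmed as the fix in this particular thread, is to restart glusterd on the node that still holds the lock; glusterd keeps these locks in memory, and restarting it does not touch the running brick processes:

    # identify the lock-holding node from glusterd's log, then on that node:
    service glusterd restart           # systemctl restart glusterd on systemd hosts
    gluster volume status oem-shared   # retry the previously blocked command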
2017 Dec 15
3
Production Volume will not start
Hi all, I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: Error: Request timed out For some time after that, the volume is locked and we either have to wait or restart Gluster services. In the glusterd.log, it shows the following: [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
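A hedged first round of checks when a volume start request times out like this (the volume name myvol is a placeholder; the snippet does not name one):

    gluster peer status                       # every peer should show "Peer in Cluster (Connected)"
    systemctl status glusterd                 # on each node
    gluster volume start myvol force          # retries starting any bricks that are still down
    tail -f /var/log/glusterfs/glusterd.log   # watch for brick-start and lock errors during the retry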
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote: > Hi all, > > I have an issue where our volume will not start from any node. When > attempting to start the volume it will eventually return: > > Error: Request timed out > > For some time after that, the volume is locked and we either have to wait > or restart
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi, we'd like to use glusterfs for Proxmox and virtual machines with qcow2 disk images. We have a three-node glusterfs setup with one volume; Proxmox is attached and VMs are created, but after some time, I think after a lot of I/O inside a VM, the data inside the virtual machine gets corrupted. When I copy files from or to our glusterfs directly, everything is OK. I've
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
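The linked file is the "virt" option group shipped with glusterfs; a hedged way to apply it as a group rather than option by option (vm-store is a placeholder volume name, and the option list should be checked against your gluster version first):

    # applies /var/lib/glusterd/groups/virt (sharding, eager-lock, remote-dio, ...) in one command;
    # note the warning above: once sharding is enabled it cannot be disabled
    gluster volume set vm-store group virt
    gluster volume get vm-store all | grep -E 'shard|eager-lock|remote-dio'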
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris: Whilst I don't know the issue or the root cause of your problem with using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system. Maybe that might work better with Proxmox? Hope this helps. Sorry that I wasn't able