Displaying 20 results from an estimated 20000 matches similar to: "glusterd-locks.c:572:glusterd_mgmt_v3_lock"
2018 Mar 16
0
Bug 1442983 on 3.10.11 Unable to acquire lock for gluster volume leading to 'another transaction in progress' error
Have sent a backport request https://review.gluster.org/19730 on the
release-3.10 branch. Hopefully this fix will be picked up in the next update.
On Fri, Mar 16, 2018 at 4:47 PM, Marco Lorenzo Crociani <
marcoc at prismatelecomtesting.com> wrote:
> Hi,
> I'm hitting bug https://bugzilla.redhat.com/show_bug.cgi?id=1442983
> on glusterfs 3.10.11 and oVirt 4.1.9 (and before on glusterfs
2018 Mar 17
1
Bug 1442983 on 3.10.11 Unable to acquire lock for gluster volume leading to 'another transaction in progress' error
Hi,
is this patch already available in the community version of gluster
3.12? If so, in which version? If not, is there a plan to backport it?
Greetings,
Paolo
On 16/03/2018 13:24, Atin Mukherjee wrote:
> Have sent a backport request https://review.gluster.org/19730 on the
> release-3.10 branch. Hopefully this fix will be picked up in the next update.
>
> On Fri, Mar 16, 2018 at
2018 Mar 16
3
Bug 1442983 on 3.10.11 Unable to acquire lock for gluster volume leading to 'another transaction in progress' error
Hi,
I'm hitting bug https://bugzilla.redhat.com/show_bug.cgi?id=1442983
on glusterfs 3.10.11 and oVirt 4.1.9 (and before on glusterfs 3.8.14)
The bug report says it is fixed in glusterfs-3.12.2-1.
Is there a plan to backport the fix to the 3.10.x releases, or is the only
way to fix it to upgrade to 3.12?
Regards,
--
Marco Crociani
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes.
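For reference, a minimal sketch of how these logs could be gathered in one pass; the node names and the log path (/var/log/glusterfs/cmd_history.log is the usual default, but may differ per installation) are assumptions, not details from this thread:

    # Copy cmd_history.log from each storage node (node names are placeholders).
    for node in node1 node2 node3; do
        scp "$node:/var/log/glusterfs/cmd_history.log" "cmd_history-$node.log"
    done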
On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it>
wrote:
> Hi list,
>
> recently I've noticed a strange behaviour of my gluster storage: sometimes,
> while executing a simple command like "gluster volume status
> vm-images-repo", as a response I get "Another transaction
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
So from the cmd_history.log files across all the nodes it's evident that
multiple commands on the same volume are run simultaneously, which can
result in transaction collisions, and you can end up with one command
succeeding and the others failing. Ideally, if you are running the volume
status command for monitoring, it is suggested to run it from only one node.
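To illustrate the suggestion, a minimal sketch of a check that only runs on one designated node; the hostname is a placeholder, not something from this thread:

    # Illustrative only: run the status check from a single designated node so
    # that concurrent queries do not collide ("gluster-node1" is a placeholder).
    if [ "$(hostname -s)" = "gluster-node1" ]; then
        gluster volume status vm-images-repo
    fi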
On Thu, Jul 20, 2017 at 3:54 PM, Paolo
2017 Jul 26
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin,
I initially disabled the gluster status check on all nodes except one
on my nagios instance, as you recommended, but the issue happened again.
So I disabled it on every node, but the error still happens; currently
only oVirt is monitoring gluster.
I cannot modify this behaviour in the oVirt GUI. Is there anything
I could do from the gluster perspective to solve this
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi list,
recently I've noticed a strange behaviour of my gluster storage: sometimes,
while executing a simple command like "gluster volume status
vm-images-repo", as a response I get "Another transaction is in progress
for vm-images-repo. Please try again after sometime.". This situation
does not resolve itself simply by waiting; I have to restart glusterd on
the node that
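A quick way to see which node is logging the lock failure named in the subject (glusterd_mgmt_v3_lock) is to grep the glusterd log on every node; this is only a sketch, the node names are placeholders, and the glusterd log file name varies by distribution and version (glusterd.log or etc-glusterfs-glusterd.vol.log):

    # Show the most recent mgmt_v3 lock errors on each node (names are placeholders).
    for node in node1 node2 node3; do
        echo "== $node =="
        ssh "$node" "grep glusterd_mgmt_v3_lock /var/log/glusterfs/glusterd.log | tail -n 5"
    done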
2017 Jul 27
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin,
in attachment all the requested logs.
Considering that I'm using gluster as a storage system for oVirt, I've
also checked its logs and I've seen that almost every command on all
three nodes is executed by the supervdsm daemon, and not only by the
SPM node. Could this be the root cause of the problem?
Greetings,
Paolo
PS: could you suggest a better method than
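As a rough cross-check of the observation above, one could count how many volume commands each node records in its own cmd_history.log; the hostnames and the log path are assumptions:

    # Count 'volume status' entries recorded by each node (hostnames are placeholders).
    for node in ovirt-node1 ovirt-node2 ovirt-node3; do
        printf '%s: ' "$node"
        ssh "$node" "grep -c 'volume status' /var/log/glusterfs/cmd_history.log"
    done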
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes.
thanks,
Paolo
On 20/07/2017 11:38, Atin Mukherjee wrote:
> Please share the cmd_history.log file from all the storage nodes.
>
> On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara
> <paolo.margara at polito.it <mailto:paolo.margara at polito.it>> wrote:
>
> Hi list,
>
> recently I've
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
OK, on my nagios instance I've disabled the gluster status check on all
nodes except one; I'll check if this is enough.
Thanks,
Paolo
On 20/07/2017 13:50, Atin Mukherjee wrote:
> So from the cmd_history.log files across all the nodes it's evident that
> multiple commands on the same volume are run simultaneously, which can
> result in transaction collisions, and you can
2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Technically if only one node is pumping all these status commands, you
shouldn't get into this situation. Can you please help me with the latest
cmd_history & glusterd log files from all the nodes?
On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara <paolo.margara at polito.it>
wrote:
> Hi Atin,
>
> I've initially disabled gluster status check on all nodes except on one on
2018 Apr 09
2
ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")
On 06/04/2018 19:33, Shyam Ranganathan wrote:
> Hi,
>
> We postponed this and I did not announce this to the lists. The number
> of bugs fixed against 3.10.12 is low, and I decided to move this to the
> 30th of Apr instead.
>
> Is there a specific fix that you are looking for in the release?
>
Hi,
yes, it's this: https://review.gluster.org/19730
2018 Apr 12
1
ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")
On 09/04/2018 21:36, Shyam Ranganathan wrote:
> On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:
>> On 06/04/2018 19:33, Shyam Ranganathan wrote:
>>> Hi,
>>>
>>> We postponed this and I did not announce this to the lists. The number
>>> of bugs fixed against 3.10.12 is low, and I decided to move this to the
>>> 30th of Apr instead.
2018 Apr 06
2
ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")
Hi,
is there any news on the 3.10.12 release?
Regards,
--
Marco Crociani
2012 Jan 13
5
Can't resize second device in RAID1
Hi,
the situation:
Label: 'RootFS' uuid: c87975a0-a575-405e-9890-d3f7f25bbd96
Total devices 2 FS bytes used 284.98GB
devid 2 size 311.82GB used 286.51GB path /dev/sdb3
devid 1 size 897.76GB used 286.51GB path /dev/sda3
RootFS created when sda3 was 897.76GB and sdb3 311.82GB.
I have now freed up additional space on sdb, so I deleted sdb3 and recreated
it occupying all
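For reference, the per-device resize syntax that applies to this layout would look like the following sketch; the mountpoint /mnt is an assumption, and devid 2 corresponds to /dev/sdb3 in the listing above:

    # Grow the device with devid 2 (/dev/sdb3 above) to its maximum size.
    # /mnt is a placeholder for the filesystem's actual mountpoint.
    btrfs filesystem resize 2:max /mnt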
2018 Apr 06
0
ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")
Hi,
We postponed this and I did not announce this to the lists. The number
of bugs fixed against 3.10.12 is low, and I decided to move this to the
30th of Apr instead.
Is there a specific fix that you are looking for in the release?
Thanks,
Shyam
On 04/06/2018 11:47 AM, Marco Lorenzo Crociani wrote:
> Hi,
> is there any news on the 3.10.12 release?
>
> Regards,
>
2018 Apr 09
0
ETA for 3.10.12 (was "Planned for the 30th of Mar, 2018")
On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:
> On 06/04/2018 19:33, Shyam Ranganathan wrote:
>> Hi,
>>
>> We postponed this and I did not announce this to the lists. The number
>> of bugs fixed against 3.10.12 is low, and I decided to move this to the
>> 30th of Apr instead.
>>
>> Is there a specific fix that you are looking for in the release?
2012 May 03
1
[PATCH] Btrfs: fix crash in scrub repair code when device is missing
Fix a crash that occurs when scrub tries to repair an I/O or checksum error
and one of the devices containing the mirror is missing: scrub crashes in
bio_add_page because the bdev is a NULL pointer for missing devices.
Reported-by: Marco L. Crociani <marco.crociani@gmail.com>
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
---
fs/btrfs/scrub.c | 7 +++++++
1 file changed, 7 insertions(+)
2018 Feb 26
0
rpc/glusterd-locks error
Good morning.
We have a 6 node cluster. 3 nodes are participating in a replica 3 volume.
Naming convention:
xx01 - 3 nodes participating in ovirt_vol
xx02 - 3 nodes NOT participating in ovirt_vol
Last week, we restarted glusterd on each node in the cluster to update (one at
a time).
The three xx01 nodes all show the following in glusterd.log:
[2018-02-26 14:31:47.330670] E
2006 Aug 30
0
xen-linux-system-2.6.17-2-xen-amd64 is broken?
Hi,
I have updated my debian etch and it stopped booting with xen. I see the
new 2.6.17 kernel in unstable, so I have installed it, but this doesn't
work either (it reboots during boot).
--
Marco Crociani - Tyrael
* Why use Open Formats? - http://www.openformats.org
* Debian GNU/Linux - http://www.debian.org