search for: margara

Displaying 19 results from an estimated 19 matches for "margara".

2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Technically, if only one node is pumping all these status commands, you shouldn't get into this situation. Can you please help me with the latest cmd_history & glusterd log files from all the nodes? On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Atin, > > I initially disabled the gluster status check on all nodes except one on > my nagios instance as you recommended, but this issue happened again. > > So I've disabled it on every node but the error still happens, cur...
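
A rough sketch of how those files might be gathered, assuming the default GlusterFS log directory /var/log/glusterfs/ and placeholder node names; the glusterd log file name varies by version, hence the glob:

    # Hypothetical helper: pull cmd_history.log and the glusterd log from each storage node.
    for node in node1 node2 node3; do
        mkdir -p "logs/$node"
        scp "$node:/var/log/glusterfs/cmd_history.log" "logs/$node/"
        scp "$node:/var/log/glusterfs/*glusterd*.log" "logs/$node/"
    done
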
2017 Jul 27
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...8, Atin Mukherjee wrote: > Technically, if only one node is pumping all these status commands, you > shouldn't get into this situation. Can you please help me with the > latest cmd_history & glusterd log files from all the nodes? > > On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi Atin, > > I initially disabled the gluster status check on all nodes except > on one on my nagios instance as you recommended, but this issue > happened again. > ...
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...ltaneously, which can > result in transaction collisions, and you can end up with one command > succeeding and others failing. Ideally, if you are running the volume > status command for monitoring, it's suggested to run it from only one node. > > On Thu, Jul 20, 2017 at 3:54 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > The requested logs for all three nodes are attached. > > thanks, > > Paolo > > > On 20/07/2017 11:38, Atin Mukherjee ha scritto: >> Please share...
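
To illustrate that suggestion, a hypothetical Nagios-style check meant to be scheduled on one designated monitor node only, so concurrent "gluster volume status" transactions cannot collide; the volume name is the one mentioned in the thread and the script itself is an assumption, not part of the discussion:

    #!/bin/bash
    # check_gluster_status.sh -- run on ONE node only.
    VOLUME="vm-images-repo"
    if gluster volume status "$VOLUME" > /dev/null 2>&1; then
        echo "OK - volume status query succeeded on $VOLUME"
        exit 0
    else
        echo "CRITICAL - could not query status of $VOLUME"
        exit 2
    fi
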
2017 Jul 26
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...ly only oVirt is monitoring gluster. I cannot modify this behaviour in the oVirt GUI; is there anything I could do from the gluster perspective to solve this issue? Considering that 3.8 is near EOL, upgrading to 3.10 could also be an option. Greetings, Paolo On 20/07/2017 15:37, Paolo Margara wrote: > > OK, on my nagios instance I've disabled the gluster status check on all > nodes except one, I'll check if this is enough. > > Thanks, > > Paolo > > > On 20/07/2017 13:50, Atin Mukherjee wrote: >> So from the cmd_history.logs across...
2017 Jun 29
1
afr-self-heald.c:479:afr_shd_index_sweep
On 29/06/2017 16:27, Pranith Kumar Karampuri wrote: > > > On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi Pranith, > > I'm using this guide > https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md > <https...
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...same volume are run simultaneously, which can result in transaction collisions, and you can end up with one command succeeding and others failing. Ideally, if you are running the volume status command for monitoring, it's suggested to run it from only one node. On Thu, Jul 20, 2017 at 3:54 PM, Paolo Margara <paolo.margara at polito.it> wrote: > The requested logs for all three nodes are attached. > > thanks, > > Paolo > > On 20/07/2017 11:38, Atin Mukherjee wrote: > > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul...
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
The requested logs for all three nodes are attached. thanks, Paolo On 20/07/2017 11:38, Atin Mukherjee wrote: > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi list, > > recently I've noticed a strange behaviour of my gluster storage: > sometimes while executing a simple command like "gluster volume > status vm-images-re...
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
On Thu, Jun 29, 2017 at 7:48 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Pranith, > > I'm using this guide https://github.com/nixpanic/glusterdocs/blob/f6d48dc17f2cb6ee4680e372520ec3358641b2bc/Upgrade-Guide/upgrade_to_3.8.md > > Definitely my fault, but I think it is better to specify somewhere t...
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
...Which document did you follow for the upgrade? We can fix the > documentation if there are any issues. > > On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com> wrote: > > On 06/29/2017 01:08 PM, Paolo Margara wrote: >> >> Hi all, >> >> for the upgrade I followed this procedure: >> >> * put the node in maintenance mode (ensure no clients are active) >> * yum versionlock delete glusterfs* >> * service glusterd stop >> * yum u...
2017 Jun 29
2
afr-self-heald.c:479:afr_shd_index_sweep
On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > * put the node in maintenance mode (ensure no clients are active) > * yum versionlock delete glusterfs* > * service glusterd stop > * yum update > * systemctl daemon-reload > * service glus...
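
A sketch of that per-node sequence as one script, run on one node at a time; only the steps visible in the excerpt are reproduced, and the final restart step is an assumption since the excerpt is truncated there:

    # Rough per-node upgrade sequence (node already in maintenance mode, no active clients).
    yum versionlock delete 'glusterfs*'   # lift the version lock on the gluster packages
    service glusterd stop                 # stop the management daemon
    yum update -y                         # pull in the new packages
    systemctl daemon-reload               # pick up updated unit files
    service glusterd start                # assumed final step; the excerpt is truncated here
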
2017 Jun 28
3
afr-self-heald.c:479:afr_shd_index_sweep
...ntered this problem. Currently all VM images appear to be OK, but before creating the 'entry-changes' directory I would like to ask whether this is still the correct procedure to fix this issue and whether this problem could have affected the heal operations that occurred in the meantime. Thanks. Greetings, Paolo Margara
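
For context, one way to verify on a node whether that index directory is present, assuming the usual .glusterfs/indices layout under the brick root; the brick path below is a placeholder, not taken from the thread:

    # Check whether the per-brick 'entry-changes' index directory exists.
    BRICK_PATH="/gluster/brick1/iso-images-repo"   # placeholder for the real brick root
    ls -ld "$BRICK_PATH/.glusterfs/indices/entry-changes" \
        || echo "entry-changes index directory is missing on this brick"
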
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
Paolo, Which document did you follow for the upgrade? We can fix the documentation if there are any issues. On Thu, Jun 29, 2017 at 2:07 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/29/2017 01:08 PM, Paolo Margara wrote: > > Hi all, > > for the upgrade I followed this procedure: > > - put the node in maintenance mode (ensure no clients are active) > - yum versionlock delete glusterfs* > - service glusterd stop > - yum update > - systemctl daemon-reload > - servic...
2017 Jun 28
2
afr-self-heald.c:479:afr_shd_index_sweep
On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com> wrote: > On 06/28/2017 06:52 PM, Paolo Margara wrote: > >> Hi list, >> >> yesterday I noticed the following lines in the glustershd.log file: >> >> [2017-06-28 11:53:05.000890] W [MSGID: 108034] >> [afr-self-heald.c:479:afr_shd_index_sweep] >> 0-iso-images-repo-replicate-0: unable to get index-di...
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes. On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi list, > > recently I've noticed a strange behaviour of my gluster storage: sometimes, > while executing a simple command like "gluster volume status > vm-images-repo", as a response I got "Another transaction is in progre...
2017 Jun 29
0
afr-self-heald.c:479:afr_shd_index_sweep
...-port=49163) I've checked after the restart and indeed the directory 'entry-changes' has now been created, but why did stopping the glusterd service not also stop the brick processes? Now, how can I recover from this issue? Is restarting all brick processes enough? Greetings, Paolo Margara On 28/06/2017 18:41, Pranith Kumar Karampuri wrote: > > > On Wed, Jun 28, 2017 at 9:45 PM, Ravishankar N <ravishankar at redhat.com> wrote: > > On 06/28/2017 06:52 PM, Paolo Margara wrote: > > Hi list, >...
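
A hedged sketch of what restarting the brick processes for one volume could look like; this assumes the commonly used approach of stopping the stale brick process and letting "gluster volume start ... force" respawn it, and is not the procedure confirmed in the thread itself:

    # Respawn the brick processes of one volume on this node.
    VOLUME="iso-images-repo"                 # volume name from the thread
    gluster volume status "$VOLUME"          # note the brick PIDs reported for this node
    # kill <brick-pid>                       # stop the stale brick process(es) first
    gluster volume start "$VOLUME" force     # respawns any brick that is not running
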
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...134b73 * (node3) virtnode-0-2-gluster: d9047ecd-26b5-467b-8e91-50f76a0c4d16 In this case restarting glusterd on node3 usually solves the issue. What could be the root cause of this behaviour? How can I fix this once and for all? If needed I could provide the full log file. Greetings, Paolo Margara
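
The workaround described above amounts to restarting the management daemon on the node still holding the lock; a minimal sketch, assuming systemd:

    # On the node reported as holding the volume lock (node3 in this case):
    systemctl restart glusterd              # the restart is the workaround that clears the stale lock
    gluster volume status vm-images-repo   # re-run the command that previously failed
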
2017 Jun 28
0
afr-self-heald.c:479:afr_shd_index_sweep
On 06/28/2017 06:52 PM, Paolo Margara wrote: > Hi list, > > yesterday I noticed the following lines in the glustershd.log file: > > [2017-06-28 11:53:05.000890] W [MSGID: 108034] > [afr-self-heald.c:479:afr_shd_index_sweep] > 0-iso-images-repo-replicate-0: unable to get index-dir on > iso-images-repo-client-...
2018 Mar 17
1
Bug 1442983 on 3.10.11 Unable to acquire lock for gluster volume leading to 'another transaction in progress' error
Hi, is this patch already available in the community version of gluster 3.12? In which version? If not, is there a plan to backport it? Greetings, Paolo On 16/03/2018 13:24, Atin Mukherjee wrote: > I have sent a backport request https://review.gluster.org/19730 at the > release-3.10 branch. Hopefully this fix will be picked up in the next update. > > On Fri, Mar 16, 2018 at
2018 Apr 12
0
wrong size displayed with df after upgrade to 3.12.6
Dear all, I encountered the same issue. I saw that this is fixed in 3.12.7, but I cannot find this release in the main repo (CentOS Storage SIG), only in the test one. When is this release expected to be available in the main repo? Greetings, Paolo On 09/03/2018 10:41, Stefan Solbrig wrote: > Dear Nithya, > > Thank you so much. This fixed the problem
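
One rough way to check which glusterfs builds the currently enabled repositories provide; this assumes the server package is named glusterfs-server and is only a sketch, not something suggested in the thread:

    # List every glusterfs-server build visible in the enabled repos,
    # to see whether 3.12.7 has reached the main (non-test) repo yet.
    yum list available glusterfs-server --showduplicates
    yum info glusterfs-server    # the repo field shows where each candidate comes from
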