search for: cmd_history

Displaying 20 results from an estimated 67 matches for "cmd_history".

2017 Sep 19
4
Permission for glusterfs logs.
Any suggestion would be appreciated... On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com> wrote: > Any quick suggestion.....? > > On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com > > wrote: > >> Hi Team, >> >> As you can see permission for the glusterfs logs in /var/log/glusterfs is >>
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.log and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from above info, please provide glusterd logs,
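As a rough sketch, the requested files could be bundled on each node before attaching them (paths assume the default /var/log/glusterfs layout; the glusterd log file may be named glusterd.log or etc-glusterfs-glusterd.vol.log depending on the release):

    # collect the management and command-history logs from one node
    cd /var/log/glusterfs
    tar czf /tmp/gluster-logs-$(hostname).tar.gz cmd_history.log etc-glusterfs-glusterd.vol.log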
2017 Sep 18
2
Permission for glusterfs logs.
Hi Team, As you can see, permission for the glusterfs logs in /var/log/glusterfs is 600. drwxr-xr-x 3 root root 140 Jan 1 00:00 .. *-rw------- 1 root root 0 Jan 3 20:21 cmd_history.log* drwxr-xr-x 2 root root 40 Jan 3 20:21 bricks drwxr-xr-x 3 root root 100 Jan 3 20:21 . *-rw------- 1 root root 2102 Jan 3 20:21 etc-glusterfs-glusterd.vol.log* Due to that, a non-root user is not able to
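A minimal workaround sketch for the permission problem described here, assuming a hypothetical non-root user named 'monitor' (note that glusterd may recreate these files with mode 0600 on log rotation, so an ACL applied this way is not a permanent fix):

    # grant a specific non-root user read access to the 0600 log files
    setfacl -m u:monitor:r /var/log/glusterfs/cmd_history.log
    setfacl -m u:monitor:r /var/log/glusterfs/etc-glusterfs-glusterd.vol.log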
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote: > Please provide the output of gluster volume info, gluster > volume status and gluster peer status. > > Apart from above info, please provide glusterd logs, > cmd_history.log. > > Thanks > Gaurav > > On Tue, Sep 12, 2017 at 2:22 PM, lejeczek > <peljasz at yahoo.co.uk> wrote:
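The state being asked for here comes from standard gluster CLI commands; one way to capture it all into a single file for the list (the output path is just an example):

    gluster volume info    > /tmp/gluster-state.txt
    gluster volume status >> /tmp/gluster-state.txt
    gluster peer status   >> /tmp/gluster-state.txt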
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes. thanks, Paolo On 20/07/2017 11:38, Atin Mukherjee wrote: > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi list, > > recently I've
2017 Sep 20
4
[Gluster-devel] Permission for glusterfs logs.
On 09/18/2017 09:22 PM, ABHISHEK PALIWAL wrote: > Any suggestion would be appreciated... > > On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com> wrote: > > Any quick suggestion.....? > > On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL > <abhishpaliwal at gmail.com
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please check whether the brick process went down or crashed. Doing a volume start force should resolve the issue. On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > Please send me the logs as well, i.e. glusterd.log and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
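The "volume start force" mentioned above is commonly used to bring a crashed brick process back up without touching the bricks that are still running; a sketch with a placeholder volume name:

    # restart any brick processes that are down for the volume (replace VOLNAME)
    gluster volume start VOLNAME force
    gluster volume status VOLNAME    # verify the brick is listed as online again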
2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Technically if only one node is pumping all these status commands, you shouldn't get into this situation. Can you please help me with the latest cmd_history & glusterd log files from all the nodes? On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Atin, > > I've initially disabled gluster status check on all nodes except on one on
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes. On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi list, > > recently I've noted a strange behaviour of my gluster storage, sometimes > while executing a simple command like "gluster volume status > vm-images-repo" as a response I got "Another transaction
2017 Jul 27
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin, attached are all the requested logs. Considering that I'm using gluster as a storage system for oVirt, I've also checked those logs and I've seen that almost every command on all three nodes is executed by the supervdsm daemon and not only by the SPM node. Could this be the root cause of this problem? Greetings, Paolo PS: could you suggest a better method than
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
OK, on my nagios instance I've disabled the gluster status check on all nodes except one; I'll check if this is enough. Thanks, Paolo On 20/07/2017 13:50, Atin Mukherjee wrote: > So from the cmd_history.logs across all the nodes it's evident that > multiple commands on the same volume are run simultaneously, which can > result in transaction collisions, and you can
2017 Sep 18
0
Permission for glusterfs logs.
Any quick suggestion.....? On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com> wrote: > Hi Team, > > As you can see permission for the glusterfs logs in /var/log/glusterfs is > 600. > > drwxr-xr-x 3 root root 140 Jan 1 00:00 .. > *-rw------- 1 root root 0 Jan 3 20:21 cmd_history.log* > drwxr-xr-x 2 root root 40 Jan 3 20:21 bricks
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post: http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com> wrote: > Additionally the brick log file of the same brick would be required. > Please look for if brick process went down or crashed. Doing a volume start
2017 Jul 26
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin, I initially disabled the gluster status check on all nodes except one on my nagios instance, as you recommended, but this issue happened again. So I've disabled it on every node, but the error still happens; currently only oVirt is monitoring gluster. I cannot modify this behaviour in the oVirt GUI; is there anything I could do from the gluster perspective to solve this
2017 May 31
2
"Another Transaction is in progres..."
Hi all, I am trying to do trivial things, like setting quota, or just querying the status, and keep getting "Another transaction is in progress for <some volume>". These messages pop up, then disappear for a while, then pop up again... What do these messages mean? How do I figure out which "transaction" is meant here, and what do I do about it? Krist -- Vriendelijke
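One way to narrow down which "transaction" is involved is to compare the command history on every peer around the time the error appears; a minimal sketch, assuming the default log location:

    # run on each node: show the most recent entries in the command history
    tail -n 50 /var/log/glusterfs/cmd_history.log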
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote: > These symptoms appear to be the same as I've recorded in > this post: > > http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html > > On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee > <atin.mukherjee83 at gmail.com> wrote: > > Additionally the
2017 Jun 19
0
different brick using the same port?
On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote: > Hi, all > > > > I found two of my bricks from different volumes are using the same port > 49154 on the same glusterfs server node, is this normal? > No it's not. Can you please help me with the following information: 1. gluster --version 2. glusterd log & cmd_history logs from both
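To collect the information requested above, something along these lines should work on each node (the ss invocation is just one way to see which brick processes currently hold the port):

    gluster --version
    ss -tlnp | grep 49154     # processes listening on the disputed port
    gluster volume status     # brick-to-port mapping as glusterd reports it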
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
So from the cmd_history.logs across all the nodes it's evident that multiple commands on the same volume are run simultaneously, which can result in transaction collisions, and you can end up with one command succeeding and others failing. Ideally, if you are running the volume status command for monitoring, it should be run from only one node. On Thu, Jul 20, 2017 at 3:54 PM, Paolo
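One simple way to follow this advice is to schedule the monitoring check from a single designated node only, for example via cron (the volume name is taken from this thread; the five-minute interval and output path are arbitrary choices, and the gluster binary path may differ on your distribution):

    # crontab entry on one node only: poll volume status every 5 minutes
    */5 * * * * /usr/sbin/gluster volume status vm-images-repo > /var/tmp/gluster-status.out 2>&1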
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi list, recently I've noticed a strange behaviour of my gluster storage: sometimes while executing a simple command like "gluster volume status vm-images-repo" I get the response "Another transaction is in progress for vm-images-repo. Please try again after sometime.". This situation does not resolve itself simply by waiting; I have to restart glusterd on the node that
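The restart mentioned here is normally done through the service manager; a sketch assuming a systemd-based distribution (restarting glusterd only affects the management daemon, not the running brick processes):

    systemctl restart glusterd
    systemctl status glusterd    # confirm the daemon came back up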
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and gluster peer status. Apart from above info, please provide glusterd logs, cmd_history.log. Thanks Gaurav On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi everyone > > I have 3-peer cluster with all vols in replica mode, 9 vols. > What I see, unfortunately, is one brick