search for: cmd_history

Displaying 20 results from an estimated 67 matches for "cmd_history".

2017 Sep 19
4
Permission for glusterfs logs.
...K PALIWAL <abhishpaliwal at gmail.com > > wrote: > >> Hi Team, >> >> As you can see permission for the glusterfs logs in /var/log/glusterfs is >> 600. >> >> drwxr-xr-x 3 root root 140 Jan 1 00:00 .. >> *-rw------- 1 root root 0 Jan 3 20:21 cmd_history.log* >> drwxr-xr-x 2 root root 40 Jan 3 20:21 bricks >> drwxr-xr-x 3 root root 100 Jan 3 20:21 . >> *-rw------- 1 root root 2102 Jan 3 20:21 etc-glusterfs-glusterd.vol.log* >> >> Due to that non-root user is not able to access these logs files, could >> you...
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. the glusterd logs and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from above i...
2017 Sep 18
2
Permission for glusterfs logs.
Hi Team, As you can see, the permission for the glusterfs logs in /var/log/glusterfs is 600. drwxr-xr-x 3 root root 140 Jan 1 00:00 .. *-rw------- 1 root root 0 Jan 3 20:21 cmd_history.log* drwxr-xr-x 2 root root 40 Jan 3 20:21 bricks drwxr-xr-x 3 root root 100 Jan 3 20:21 . *-rw------- 1 root root 2102 Jan 3 20:21 etc-glusterfs-glusterd.vol.log* Because of that, a non-root user is not able to access these log files; could you please let me know how I can change these permission...
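A minimal workaround sketch for the question above, using standard POSIX ACLs rather than any GlusterFS option (the account name "monitor" is a hypothetical placeholder; glusterd may recreate rotated logs with mode 600, hence the default ACL on the directory):

    # Grant one non-root user read access to the existing log files
    setfacl -m u:monitor:r /var/log/glusterfs/cmd_history.log
    setfacl -m u:monitor:r /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

    # Default ACL so files created later in this directory stay readable for that user
    setfacl -d -m u:monitor:r /var/log/glusterfs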
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote: > Please provide the output of gluster volume info, gluster > volume status and gluster peer status. > > Apart from above info, please provide glusterd logs, > cmd_history.log. > > Thanks > Gaurav > > On Tue, Sep 12, 2017 at 2:22 PM, lejeczek > <peljasz at yahoo.co.uk> wrote: > > hi everyone > > I have 3-peer cluster with all vols in replica mode, 9 > vols. > What I see,...
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes. Thanks, Paolo. On 20/07/2017 11:38, Atin Mukherjee wrote: > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi list, > > recently I've noted a strange behaviour of my gluster storage, > som...
2017 Sep 20
4
[Gluster-devel] Permission for glusterfs logs.
...:abhishpaliwal at gmail.com>> wrote: > > Hi Team, > > As you can see permission for the glusterfs logs in > /var/log/glusterfs is 600. > > drwxr-xr-x 3 root root 140 Jan 1 00:00 .. > *-rw------- 1 root root 0 Jan 3 20:21 cmd_history.log* > drwxr-xr-x 2 root root 40 Jan 3 20:21 bricks > drwxr-xr-x 3 root root 100 Jan 3 20:21 . > *-rw------- 1 root root 2102 Jan 3 20:21 > etc-glusterfs-glusterd.vol.log* > > Due to that non-root user is not able to access these logs...
2017 Sep 13
2
one brick one volume process dies?
...brick log file of the same brick would be required. Please check whether the brick process went down or crashed. Doing a volume start force should resolve the issue. On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > Please send me the logs as well i.e glusterd.logs and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > >> >> >> On 13/09/17 06:21, Gaurav Yadav wrote: >> > Please provide the output of gluster volume info, gluster volume status >>> and gluster peer status. >...
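A small sketch of the recovery step mentioned above, using standard gluster CLI commands (<VOLNAME> is a placeholder for the affected volume):

    # See which brick processes are offline for the volume
    gluster volume status <VOLNAME>

    # Force-start the volume; this respawns any brick process that has died
    # without disturbing bricks that are already running
    gluster volume start <VOLNAME> force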
2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Technically, if only one node is pumping all these status commands, you shouldn't get into this situation. Can you please help me with the latest cmd_history & glusterd log files from all the nodes? On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Atin, > > I've initially disabled gluster status check on all nodes except on one on > my nagios instance as you recommended but this issue ha...
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes. On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi list, > > recently I've noted a strange behaviour of my gluster storage, sometimes > while executing a simple command like "gluster volume status...
2017 Jul 27
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...Paolo PS: could you suggest a better method than attachments for sharing log files? On 26/07/2017 15:28, Atin Mukherjee wrote: > Technically if only one node is pumping all these status commands, you > shouldn't get into this situation. Can you please help me with the > latest cmd_history & glusterd log files from all the nodes? > > On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara > <paolo.margara at polito.it> wrote: > > Hi Atin, > > I've initially disabled gluster status check on all nodes except &g...
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
OK, on my nagios instance I've disabled the gluster status check on all nodes except one; I'll check if this is enough. Thanks, Paolo. On 20/07/2017 13:50, Atin Mukherjee wrote: > So from the cmd_history.logs across all the nodes it's evident that > multiple commands on the same volume are run simultaneously which can > result into transactions collision and you can end up with one command > succeeding and others failing. Ideally if you are running volume > status command for monito...
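A hedged sketch of keeping the monitoring query on a single node and serialized, so only one gluster transaction is in flight at a time (the cron.d entry, lock file and output path are illustrative assumptions, not part of the thread):

    # /etc/cron.d/gluster-status -- install on one designated node only.
    # flock -n skips the run if the previous check still holds the lock,
    # so "gluster volume status" transactions never overlap.
    * * * * *  root  flock -n /var/lock/gluster-status.lock gluster volume status --xml > /var/tmp/gluster-status.xml 2>&1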
2017 Sep 18
0
Permission for glusterfs logs.
...Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com> wrote: > Hi Team, > > As you can see permission for the glusterfs logs in /var/log/glusterfs is > 600. > > drwxr-xr-x 3 root root 140 Jan 1 00:00 .. > *-rw------- 1 root root 0 Jan 3 20:21 cmd_history.log* > drwxr-xr-x 2 root root 40 Jan 3 20:21 bricks > drwxr-xr-x 3 root root 100 Jan 3 20:21 . > *-rw------- 1 root root 2102 Jan 3 20:21 etc-glusterfs-glusterd.vol.log* > > Due to that non-root user is not able to access these logs files, could > you please let me know how...
2017 Sep 13
0
one brick one volume process dies?
...brick would be required. > Please look for if brick process went down or crashed. Doing a volume start > force should resolve the issue. > > On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > >> Please send me the logs as well i.e glusterd.logs and cmd_history.log. >> >> >> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: >> >>> >>> >>> On 13/09/17 06:21, Gaurav Yadav wrote: >>> >> Please provide the output of gluster volume info, gluster volume status >&g...
2017 Jul 26
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
...17 15:37, Paolo Margara wrote: > > OK, on my nagios instance I've disabled gluster status check on all > nodes except on one, I'll check if this is enough. > > Thanks, > > Paolo > > > On 20/07/2017 13:50, Atin Mukherjee wrote: >> So from the cmd_history.logs across all the nodes it's evident that >> multiple commands on the same volume are run simultaneously which can >> result into transactions collision and you can end up with one >> command succeeding and others failing. Ideally if you are running >> volume status co...
2017 May 31
2
"Another Transaction is in progres..."
Hi all, I am trying to do trivial things, like setting a quota or just querying the status, and keep getting "Another transaction is in progress for <some volume>". These messages pop up, then disappear for a while, then pop up again... What do these messages mean? How do I figure out which "transaction" is meant here, and what do I do about it? Krist -- Vriendelijke
2017 Sep 28
1
one brick one volume process dies?
...Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA .... .... > On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav > <gyadav at redhat.com> wrote: > > Please send me the logs as well i.e glusterd.logs > and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek > <peljasz at yahoo.co.uk> > wrote: > > > > On 13/09/17 06:21, Gaurav Yadav wrote: > > Please provide the output of g...
2017 Jun 19
0
different brick using the same port?
...; wrote: > Hi, all > > > > I found two of my bricks from different volumes are using the same port > 49154 on the same glusterfs server node, is this normal? > No, it's not. Can you please help me with the following information: 1. gluster --version 2. glusterd log & cmd_history logs from both the nodes 3. If you are using the latest gluster release (3.11), the glusterd statedump output, obtained by executing # kill -SIGUSR1 $(pidof glusterd); the file will be available in /var/run/gluster > > Status of volume: home-rabbitmq-qa > > Gluster process...
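A rough sketch of gathering the information requested above, run on each node. The glusterd log file name varies by version (glusterd.log on newer releases, etc-glusterfs-glusterd.vol.log on older ones), and the tarball path is just an example:

    # 1. Version
    gluster --version

    # 2. glusterd and command-history logs (default log directory)
    tar czf /tmp/gluster-logs-$(hostname).tar.gz \
        /var/log/glusterfs/glusterd.log \
        /var/log/glusterfs/cmd_history.log

    # 3. On 3.11+, trigger a glusterd statedump and locate the freshly written dump
    kill -SIGUSR1 $(pidof glusterd)
    ls -lt /var/run/gluster | head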
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
So from the cmd_history.log files across all the nodes it's evident that multiple commands on the same volume are being run simultaneously, which can result in transaction collisions, and you can end up with one command succeeding and the others failing. Ideally, if you are running the volume status command for monitoring, it's sugges...
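A hedged sketch of how one might confirm such collisions from the logs; the grep pattern and the assumption that cmd_history.log lines begin with a sortable timestamp may need adjusting per release, and the nodeN-*.log names are just copies made for comparison:

    # On each node, look at recent volume commands and their timestamps
    grep "volume status" /var/log/glusterfs/cmd_history.log | tail -n 20

    # After copying the file from every node, interleave them by timestamp and
    # check whether different nodes issued commands against the same volume
    # within the same second
    sort node1-cmd_history.log node2-cmd_history.log node3-cmd_history.log | grep "volume status" | tail -n 40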
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi list, recently I've noticed a strange behaviour in my gluster storage: sometimes, while executing a simple command like "gluster volume status vm-images-repo", I get the response "Another transaction is in progress for vm-images-repo. Please try again after sometime.". This situation is not resolved simply by waiting; I have to restart glusterd on the node that
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and gluster peer status. Apart from the above info, please provide the glusterd logs and cmd_history.log. Thanks, Gaurav. On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > hi everyone > > I have 3-peer cluster with all vols in replica mode, 9 vols. > What I see, unfortunately, is one brick fails in one vol, when it happens > it's always the same...
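For reference, the three status commands requested above are standard gluster CLI calls, run as root on any peer:

    gluster volume info      # volume layout, options and brick list
    gluster volume status    # per-brick process state, ports and PIDs
    gluster peer status      # peer membership and connectivity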