similar to: working dir & logs - configuration

Displaying 20 results from an estimated 600 matches similar to: "working dir & logs - configuration"

2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
In attachment the requested logs for all the three nodes. thanks, Paolo On 20/07/2017 11:38, Atin Mukherjee wrote: > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara > <paolo.margara at polito.it <mailto:paolo.margara at polito.it>> wrote: > > Hi list, > > recently I've
2007 Jul 31
2
Ham Radio s/w and CentOS? {including Echolink}
I am going to present Linux to my ham radio club at some point in the next few months, and wanted to collect info on ham radio software, including options for echolink. Our repeater has an echolink connection, so if I present the software available for it, it might perk up people's interest more. It would also be very helpful if the echolink applications offered proxy/firewall options
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well i.e glusterd.logs and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from above info, please provide glusterd logs,
2017 Sep 18
2
Permission for glusterfs logs.
Hi Team, As you can see permission for the glusterfs logs in /var/log/glusterfs is 600. drwxr-xr-x 3 root root 140 Jan 1 00:00 .. *-rw------- 1 root root 0 Jan 3 20:21 cmd_history.log* drwxr-xr-x 2 root root 40 Jan 3 20:21 bricks drwxr-xr-x 3 root root 100 Jan 3 20:21 . *-rw------- 1 root root 2102 Jan 3 20:21 etc-glusterfs-glusterd.vol.log* Due to that non-root user is not able to
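One workaround discussed for mode-600 logs is simply relaxing the permissions on the existing files so a non-root user can read them. A minimal sketch, demonstrated on a scratch directory rather than the real /var/log/glusterfs (adjust LOGDIR for a live system; whether 0644 is appropriate for your site is a policy decision, not something the thread settles):

```shell
# Demonstrate relaxing glusterfs log permissions on a scratch directory.
LOGDIR=$(mktemp -d)
touch "$LOGDIR/cmd_history.log"
chmod 0600 "$LOGDIR/cmd_history.log"   # mimic the 600 mode reported on the list
chmod 0644 "$LOGDIR"/*.log             # make the logs world-readable
stat -c '%a' "$LOGDIR/cmd_history.log" # → 644
```

Note that glusterd may recreate rotated logs with its default mode, so a persistent fix would need to be applied wherever the logs are (re)created, e.g. in a logrotate `create` directive.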
2018 May 15
0
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
You can still get them from https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ (I don't know how much longer they'll be there. I suggest you copy them if you think you're going to need them in the future.) On 05/15/2018 04:58 AM, Davide Obbi wrote: > hi, > > i noticed that this repo for glusterfs 3.13 does not exists anymore at: > >
2017 Jul 27
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin, in attachment all the requested logs. Considering that I'm using gluster as a storage system for oVirt I've checked also these logs and I've seen that almost every commands on all the three nodes are executed by the supervdsm daemon and not only by the SPM node. Could be this the root cause of this problem? Greetings, Paolo PS: could you suggest a better method than
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
Thanks Kaleb, any chance I can make the node work after the downgrade? Thanks. On Tue, May 15, 2018 at 2:02 PM, Kaleb S. KEITHLEY <kkeithle at redhat.com> wrote: > > You can still get them from > https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.13/ > > (I don't know how much longer they'll be there. I suggest you copy them > if you think
2019 Jun 15
2
Dovecot LMTP rejecting mail from address with apostrophe
> On 15 June 2019 10:56 Daniel Lange via dovecot <dovecot at dovecot.org> wrote: > > > Am 15.06.19 um 00:36 schrieb Michal Krzysztofowicz via dovecot: > > Would you know if Dovecot project uses an issue tracker which is publicly available, and which I can check? > > I am not aware of public access to the Open Exchange AG / Dovecot OY > issue tracker. I guess
2017 Sep 18
0
Permission for glusterfs logs.
Any quick suggestion.....? On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com> wrote: > Hi Team, > > As you can see permission for the glusterfs logs in /var/log/glusterfs is > 600. > > drwxr-xr-x 3 root root 140 Jan 1 00:00 .. > *-rw------- 1 root root 0 Jan 3 20:21 cmd_history.log* > drwxr-xr-x 2 root root 40 Jan 3 20:21 bricks
2018 May 15
1
[External] Re: glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
On 05/15/2018 08:08 AM, Davide Obbi wrote: > Thanks Kaleb, > > any chance i can make the node working after the downgrade? > thanks Without knowing what doesn't work, I'll go out on a limb and guess that it's an op-version problem. Shut down your 3.13 nodes, change their op-version to one of the valid 3.12 op-versions (e.g. 31203) and restart. Then the 3.12 nodes should
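The op-version fix suggested above amounts to stopping glusterd and editing `operating-version` in glusterd's state file back to a 3.12 value such as 31203. A hedged sketch, run here against a scratch copy rather than the real `/var/lib/glusterd/glusterd.info` (the file contents shown are illustrative, not an exact dump):

```shell
# Demonstrate resetting operating-version in a copy of glusterd.info.
INFO=$(mktemp)
printf 'UUID=0000\noperating-version=31302\n' > "$INFO"   # pretend post-3.13 state
sed -i 's/^operating-version=.*/operating-version=31203/' "$INFO"
grep '^operating-version=' "$INFO"   # → operating-version=31203
```

On a live cluster this would be done with glusterd stopped on every affected node, then glusterd restarted.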
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post: http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com> wrote: > Additionally the brick log file of the same brick would be required. > Please look for if brick process went down or crashed. Doing a volume start
2018 May 15
2
glusterfs 3.13 repo unavailable and downgrade to 3.12.9 fails
Hi, I noticed that the repo for glusterfs 3.13 does not exist anymore at: http://mirror.centos.org/centos/7/storage/x86_64/ I knew it was not going to be long-term supported, however the downgrade to 3.12 breaks the server node. I believe the issue is with: *[2018-05-15 08:54:39.981101] E [MSGID: 101019] [xlator.c:503:xlator_init] 0-management: Initialization of volume 'management'
2017 Sep 13
2
one brick one volume process dies?
Additionally the brick log file of the same brick would be required. Please look for if brick process went down or crashed. Doing a volume start force should resolve the issue. On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > Please send me the logs as well i.e glusterd.logs and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
2017 Jul 26
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Technically if only one node is pumping all these status commands, you shouldn't get into this situation. Can you please help me with the latest cmd_history & glusterd log files from all the nodes? On Wed, Jul 26, 2017 at 1:41 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi Atin, > > I've initially disabled gluster status check on all nodes except on one on
2017 Sep 19
4
Permission for glusterfs logs.
Any suggestion would be appreciated... On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" <abhishpaliwal at gmail.com> wrote: > Any quick suggestion.....? > > On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL <abhishpaliwal at gmail.com > > wrote: > >> Hi Team, >> >> As you can see permission for the glusterfs logs in /var/log/glusterfs is >>
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote: > These symptoms appear to be the same as I've recorded in > this post: > > http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html > > On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee > <atin.mukherjee83 at gmail.com > <mailto:atin.mukherjee83 at gmail.com>> wrote: > > Additionally the
2017 Jul 26
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi Atin, I've initially disabled the gluster status check on all nodes except one in my nagios instance, as you recommended, but this issue happens again. So I've disabled it on every node, but the error happens again; currently only oVirt is monitoring gluster. I cannot modify this behaviour in the oVirt GUI. Is there anything that I could do from the gluster perspective to solve this
2017 Jun 19
0
different brick using the same port?
On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang <hiscal at outlook.com> wrote: > Hi, all > > > > I found two of my bricks from different volumes are using the same port > 49154 on the same glusterfs server node, is this normal? > No it's not. Can you please help me with the following information: 1. gluster --version 2. glusterd log & cmd_history logs from both
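A quick way to spot two bricks sharing a port, as reported above, is to scan the port column of `gluster volume status` output for duplicates. A sketch run here against canned sample text standing in for the real command's output (the brick paths and port numbers are made up for illustration):

```shell
# Flag ports that appear more than once in (brick, port) pairs.
status='node1:/brick1 49154
node1:/brick2 49154
node1:/brick3 49155'
dup=$(echo "$status" | awk '{seen[$2]++} END {for (p in seen) if (seen[p] > 1) print p}')
echo "$dup"   # → 49154
```

On a live node you would feed this awk filter the actual `gluster volume status` output instead of the canned string.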
2017 May 09
1
Empty info file preventing glusterd from starting
Hi Atin/Team, We are using gluster-3.7.6 with a setup of two bricks, and during a system restart I have seen that the glusterd daemon fails to start. While analyzing the logs from the etc-glusterfs.......log file I found the logs below [2017-05-06 03:33:39.798087] I [MSGID: 100030] [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version