2017 Aug 21
0
Glusterd not working with systemd in redhat 7
On Mon, Aug 21, 2017 at 2:49 AM, Cesar da Silva <thunderlight1 at gmail.com>
wrote:
> Hi!
> I am having the same issue but I am running Ubuntu v16.04.
> It does not mount during boot, but works if I mount it manually. I am
> running the Gluster-server on the same machines (3 machines)
> Here is the /etc/fstab file
>
> /dev/sdb1 /data/gluster ext4 defaults 0 0
>
>
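The gluster mount line itself is not shown in this excerpt. A commonly used form, assuming the volume is named "gv0" and mounted at /mnt/gluster (both assumptions), adds _netdev so the mount waits for the network, plus noauto and x-systemd.automount so boot does not block while glusterd on the same host is still starting:

web1.dasilva.network:/gv0 /mnt/gluster glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0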
2017 Aug 21
1
Glusterd not working with systemd in redhat 7
Hi!
Please see below. Note that web1.dasilva.network is the address of the
local machine where one of the bricks is installed and that tries to mount.
[2017-08-20 20:30:40.359236] I [MSGID: 100030] [glusterfsd.c:2476:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.11.2
(args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2017-08-20 20:30:40.973249] I [MSGID: 106478]
2018 Feb 08
2
Thousands of EPOLLERR - disconnecting now
Hello
I have a large cluster in which every node is logging:
I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR -
disconnecting now
At a rate of around 4 or 5 per second per node, which is adding up to a
lot of messages. This seems to happen while my cluster is idle.
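The flooding line is logged at INFO level (the leading "I"), so a hedged stopgap while the root cause is investigated is to raise the log level so INFO-level messages are suppressed; the volume name below is an assumption:

# suppress INFO-level messages in brick logs (volume name assumed)
gluster volume set myvol diagnostics.brick-log-level WARNING
# glusterd itself also accepts a log level at startup
glusterd --log-level WARNING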
2017 Jun 15
1
peer probe failures
Hi,
I'm having a similar issue, were you able to solve it?
Thanks.
Hey all,
I've got a strange problem going on here. I've installed glusterfs-server
on ubuntu 16.04:
glusterfs-client/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic]
glusterfs-common/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic]
glusterfs-server/xenial,now 3.7.6-1ubuntu1 amd64 [installed]
I can
2018 Mar 21
2
Brick process not starting after reinstall
Hi all,
our systems have suffered a host failure in a replica three setup.
The host needed a complete reinstall. I followed the RH guide to
'replace a host with the same hostname'
(https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-replacing_hosts).
The machine has the same OS (CentOS 7). The new machine got a minor
version number newer
2018 Feb 08
0
Thousands of EPOLLERR - disconnecting now
On Thu, Feb 8, 2018 at 2:04 PM, Gino Lisignoli <glisignoli at gmail.com> wrote:
> Hello
>
> I have a large cluster in which every node is logging:
>
> I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR -
> disconnecting now
>
> At a rate of around 4 or 5 per second per node, which is adding up to a
> lot of messages. This seems to happen while my
2018 Mar 21
0
Brick process not starting after reinstall
Could you share the following information:
1. gluster --version
2. output of gluster volume status
3. glusterd log and all brick log files from the node where bricks didn't
come up.
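A minimal sketch for collecting the three items above in one go; the log paths assume a default install under /var/log/glusterfs/:

gluster --version
gluster volume status
tar czf gluster-logs.tgz /var/log/glusterfs/glusterd.log /var/log/glusterfs/bricks/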
On Wed, Mar 21, 2018 at 12:35 PM, Richard Neuboeck <hawk at tbi.univie.ac.at>
wrote:
> Hi all,
>
> our systems have suffered a host failure in a replica three setup.
> The host needed a
2017 Aug 06
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
Hi,
I have a distributed volume which runs on Fedora 26 systems with
glusterfs 3.11.2 from gluster.org repos:
----------
[root@taupo ~]# glusterd --version
glusterfs 3.11.2
gluster> volume info gv2
Volume Name: gv2
Type: Distribute
Volume ID: 6b468f43-3857-4506-917c-7eaaaef9b6ee
Status: Started
Snapshot Count: 0
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1:
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote:
> Ji-Hyeon,
>
> You're saying that "stripe=2 transport=rdma" should work. Ok, that
> was the first thing I wanted to know. I'll put together logs later this week.
Note that "stripe" is not tested much and practically unmaintained. We
do not advise you to use it. If you have large files that you
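For reference, sharding (the suggested replacement for stripe) is enabled per volume; a minimal sketch with the volume name and block size as assumptions:

gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB

Note that sharding applies only to files created after it is enabled; existing files are not rewritten.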
2017 Sep 13
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
I ran into something like this in 3.10.4 and filed two bugs for it:
https://bugzilla.redhat.com/show_bug.cgi?id=1491059
https://bugzilla.redhat.com/show_bug.cgi?id=1491060
Please see the above bugs for full detail.
In summary, my issue was related to glusterd's pid handling of pid files
when it starts self-heal and bricks. The issues are:
a. brick pid file leaves stale pid and brick fails
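Based on the summary above, a hedged way to check for a stale brick pid is to test whether the recorded pid is still alive; the pid-file location varies by version, so the path below is an assumption:

# check each brick pid file for a stale (dead) pid
for f in /var/lib/glusterd/vols/gv0/run/*.pid; do
  pid=$(cat "$f")
  kill -0 "$pid" 2>/dev/null || echo "stale pid $pid in $f"
done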
2012 Sep 28
1
blank plot----how do I make symbols appear
Hi,
I am trying to create a scatterplot, coding each point to one of 5
populations. I was successful when I did this for one set of data, yet
when I try plotting other data a blank plot appears (although the axes are
labelled and I can fit the regression lines from each population). I have
tried a variety of things to fix this but nothing seems to work.
I can plot the points if I do not
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained.
Ah, this was what I suspected. Understood. I'll be happy with "shard".
Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of which also acts as a client.
I looked into the logs. I paste lengthy logs below with
2017 Jun 18
0
gluster peer probe failing
Hi,
Below please find the reserved ports and log, thanks.
sysctl net.ipv4.ip_local_reserved_ports:
net.ipv4.ip_local_reserved_ports = 30000-32767
glusterd.log:
[2017-06-18 07:04:17.853162] I [MSGID: 106487] [glusterd-handler.c:1242:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 192.168.1.17 24007
[2017-06-18 07:04:17.853237] D [MSGID: 0] [common-utils.c:3361:gf_is_local_addr]
2017 Jun 20
2
gluster peer probe failing
Hi,
I have tried on my host by setting the corresponding ports, but I didn't see
the issue on my machine locally.
However, with the logs you have sent it is pretty much clear the issue is
related to ports only.
I will try to reproduce on some other machine. Will update you as soon
as possible.
Thanks
Gaurav
On Sun, Jun 18, 2017 at 12:37 PM, Guy Cukierman <guyc at elminda.com> wrote:
>
2017 Jun 20
0
gluster peer probe failing
Hi,
I am able to recreate the issue and here is my RCA.
The maximum value, i.e. 32767, was being overflowed during manipulation on
it, and this case was previously not handled properly.
Hence glusterd was crashing with SIGSEGV.
The issue is being fixed with
https://bugzilla.redhat.com/show_bug.cgi?id=1454418 and is being backported
as well.
Thanks
Gaurav
On Tue, Jun 20, 2017 at 6:43 AM, Gaurav
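Given that RCA, one hedged interim workaround (an inference from the RCA above, not advice from this thread) is to keep the overflowing boundary value 32767 out of the reserved range until the fix is released:

sysctl -w net.ipv4.ip_local_reserved_ports=30000-32766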
2017 Jun 20
1
gluster peer probe failing
Thanks Gaurav!
1. Any estimate as to when this fix will be released?
2. Any recommended workaround?
Best,
Guy.
From: Gaurav Yadav [mailto:gyadav at redhat.com]
Sent: Tuesday, June 20, 2017 9:46 AM
To: Guy Cukierman <guyc at elminda.com>
Cc: Atin Mukherjee <amukherj at redhat.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] gluster peer probe failing
2017 Jun 16
2
gluster peer probe failing
Could you please send me the output of command "sysctl
net.ipv4.ip_local_reserved_ports".
Apart from output of command please send the logs to look into the issue.
Thanks
Gaurav
On Thu, Jun 15, 2017 at 4:28 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
> +Gaurav, he is the author of the patch, can you please comment here?
>
>
> On Thu, Jun 15, 2017 at 3:28
2018 Sep 07
3
Auth process sometimes stop responding after upgrade
On Friday 7 September 2018 10:06:00 CEST, Sami Ketola wrote:
> > On 7 Sep 2018, at 11.00, Simone Lazzaris <s.lazzaris at interactive.eu>
> > wrote:
> >
> >
> > The only suspect thing is this:
> >
> > Sep 6 14:45:41 imap-front13 dovecot: director: doveadm: Host
> > 192.168.1.142
> > vhost count changed from 100 to 0
>
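A vhost count of 0 means the director stops assigning new users to that backend. A hedged way to inspect and restore it with the standard doveadm director commands (the count 100 is taken from the log line above):

doveadm director status
doveadm director update 192.168.1.142 100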
2011 Mar 11
4
Server locking up everyday around 3:30 AM - (INFO: task wget:13608 blocked for more than 120 seconds) need sleep, help.
This may or may not be CentOS related, but I am out of ideas at this
point and wanted to bounce this off the list.
I'm running a CentOS 5.5 server, running the latest kernel 2.6.18-194.32.1.el5.
Almost every day around 3:30 AM the server completely locks up and has
to be power cycled before it will come back online.
(this means someone had to wake up and reboot the server, oh how I
love being
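Since the lockup recurs at the same wall-clock time, a hedged first step is to list what cron schedules around 3:30 AM; the paths below are the CentOS defaults:

grep -v '^#' /etc/crontab /etc/cron.d/* 2>/dev/null
ls /etc/cron.daily/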
2019 Mar 08
1
Dovecot v2.3.5 released
On 7.3.2019 23.37, A. Schulze via dovecot wrote:
>
> Am 07.03.19 um 17:33 schrieb Aki Tuomi via dovecot:
>
>>> test-http-client-errors.c:2989: Assert failed: FALSE
>>> connection timed out ................................................. : FAILED
> Hello Aki,
>
>> Are you running with valgrind or on really slow system?
> I'm not aware my buildsystem