Displaying 20 results from an estimated 500 matches similar to: "Possible to bind to multiple addresses?"
2013 Nov 13
1
Disabling NFS causes E level errors in nfs.log (bug 976750)
Hello,
according to bug 976750
(https://bugzilla.redhat.com/show_bug.cgi?id=976750), the problem with the
repeating error message:
[2013-11-13 17:16:11.888894] E [socket.c:2788:socket_connect]
0-management: connection attempt failed (Connection refused)
when NFS is disabled on all volumes was supposed to be solved in
3.4.1. We're using glusterfs-server-3.4.1-3.el6.x86_64
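For reference, the built-in gluster NFS server is disabled per volume with the nfs.disable option; a minimal sketch, assuming a placeholder volume name "myvol":
gluster volume set myvol nfs.disable on
gluster volume info myvol     # the option shows up under "Options Reconfigured"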
2017 Oct 18
2
warning spam in the logs after tiering experiment
A short while ago I experimented with tiering on one of my volumes. I
decided it was not working out, so I removed the tier. I now have spam in
the glusterd.log every 7 seconds:
[2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)
[2017-10-18
2017 Oct 18
0
warning spam in the logs after tiering experiment
Forgot to mention: Gluster version 3.10.6.
On 18 October 2017 at 13:26, Alastair Neil <ajneil.tech at gmail.com> wrote:
> a short while ago I experimented with tiering on one of my volumes. I
> decided it was not working out so I removed the tier. I now have spam in
> the glusterd.log every 7 seconds:
>
> [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
2017 Nov 03
1
Ignore failed connection messages during copying files with tiering
Hi all,
We created a GlusterFS cluster with tiering. The hot tier is a
distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse
volume. When copying millions of files to the cluster, we see these logs:
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
W
2017 Nov 04
1
Fwd: Ignore failed connection messages during copying files with tiering
Hi,
We created a GlusterFS cluster with tiering. The hot tier is a
distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse
volume. When copying millions of files to the cluster, we see these logs:
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
W [socket.c:3292:socket_connect]
2017 Jun 15
1
peer probe failures
Hi,
I'm having a similar issue; were you able to solve it?
Thanks.
Hey all,
I've got a strange problem going on here. I've installed glusterfs-server
on Ubuntu 16.04:
glusterfs-client/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic]
glusterfs-common/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic]
glusterfs-server/xenial,now 3.7.6-1ubuntu1 amd64 [installed]
I can
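For context, a basic probe from an existing node looks like the sketch below; the hostname is a placeholder, it must resolve on every node, and port 24007/tcp has to be reachable between peers:
gluster peer probe gluster2     # run on a node already in the pool
gluster peer status             # peer should show "Peer in Cluster (Connected)"
gluster pool list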
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node GlusterFS setup with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Is there any way to let the secondary node keep functioning and then
replicate what changed to the first (primary) node when it's back online?
Or should I just go for a third node to allow for this?
Also, how safe is it to set the following to none?
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you
need to disable quorum so that you can still use the volume when one of
the nodes goes down.
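A minimal sketch of what disabling quorum can look like, using the volume name gv01 from the thread subject (note this trades split-brain protection for availability):
gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none
gluster volume info gv01     # both options appear under "Options Reconfigured"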
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two-node GlusterFS setup with one node down, I can't use the second
> node to mount the volume. I understand this is
2011 Jul 11
0
Instability when using RDMA transport
I've run into a stability problem with Gluster when using the RDMA transport. Below is a description of the environment, a simple script that can replicate the problem, and log files from my test system.
I can work around the problem by using the TCP transport over IPoIB, but I would like some input on what may be making the RDMA transport fail in this case.
=====
Symptoms
=====
- Error from test
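For comparison, the TCP-over-IPoIB workaround just means creating the volume with the tcp transport and letting the brick hostnames resolve to the IPoIB addresses; a sketch with placeholder host and brick names:
gluster volume create testvol replica 2 transport tcp \
    ib-node1:/bricks/b1 ib-node2:/bricks/b1
gluster volume start testvol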
2011 Dec 14
1
glusterfs crash when the one of replicate node restart
Hi, we have used GlusterFS for two years. After upgrading to 3.2.5, we
discovered that when one of the replicate nodes reboots and starts the
glusterd daemon, Gluster crashes because the other replicate node's CPU
usage reaches 100%.
Our gluster info:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Options Reconfigured:
performance.cache-size: 3GB
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Hello,
It seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3 configuration. I upgraded the first server and then rebooted it.
Gluster is not starting; it seems glusterd starts before the network layer is up.
Some logs here:
Thanks
[2017-10-04 15:33:00.506396] I [MSGID: 106143] [glusterd-pmap.c:277:pmap_registry_bind] 0-pmap: adding brick /opt/glusterfs/advdemo on port
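One generic workaround for the boot-ordering symptom (the follow-up in this thread points to a different root cause) is a systemd drop-in that delays glusterd until the network is online; a sketch:
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/wait-online.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
systemctl enable NetworkManager-wait-online.service   # or your network stack's wait-online unit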
2017 Oct 05
0
Glusterd not working with systemd in redhat 7
On Wed, Oct 4, 2017 at 9:26 PM, ismael mondiu <mondiu at hotmail.com> wrote:
> Hello ,
>
> it seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3
> configuration. I upgraded the first server and then launched a reboot.
>
>
> Gluster is not starting. Seems that gluster starts before network layer.
>
> Some logs here:
>
>
> Thanks
>
2013 Nov 16
1
Option transport.socket.bind-address fixes
Hello,
glusterd has the option transport.socket.bind-address, which allows binding
gluster to a specific IP address. This feature is necessary for HA cluster
deployment. In the initial setup everything seemed fine, but after a
little bit of testing I found several issues.
Here are the issues I found:
- NFS and selfheal daemons are started by glusterd but with volserver
hard-coded to
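For reference, the option goes into glusterd's own volfile; a minimal sketch, assuming the daemon should bind to 10.0.0.10 only (the other lines are the usual defaults):
cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.bind-address 10.0.0.10
end-volume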
2017 Oct 05
2
Glusterd not working with systemd in redhat 7
Hello Atin,
Please find below the requested information:
[root at dvihcasc0r ~]# cat /var/lib/glusterd/vols/advdemo/bricks/*
hostname=dvihcasc0r
path=/opt/glusterfs/advdemo
real_path=/opt/glusterfs/advdemo
listen-port=49152
rdma.listen-port=0
decommissioned=0
brick-id=advdemo-client-0
mount_dir=/advdemo
snap-status=0
hostname=dvihcasc0s
path=/opt/glusterfs/advdemo
2017 Oct 05
0
Glusterd not working with systemd in redhat 7
So I have the root cause. Basically, as part of the patch we write
brickinfo->uuid into the brickinfo file only when there is a change in the
volume. As per the brickinfo files you shared, the uuid was not saved since
there was no new change in the volume, and hence the uuid was always NULL
during brick resolution, because of which glusterd went for local address
resolution. Having this done with a
2004 Jan 26
1
3com 3c905b - pxe boot failure
Hi,
I'm trying to PXE-boot a clean machine (hostname=dgrid-5.srce.hr) with a
3Com 3c905b NIC (ver 4.30 MBA).
Server:
hostname: dgrid-1.srce.hr
pxelinux.0: syslinux-2.08
tftp: tftp-hpa-0.36
dhcp server: dhcp-2.0pl5-8
Client:
boot option: DHCP
The client machine successfully gets pxelinux.0 and then everything
stops (see the listing below).
I've tried with xinetd-2.3 and
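For comparison, a minimal ISC dhcpd stanza for PXE clients only needs next-server and filename on top of the usual subnet settings; a sketch with placeholder addresses:
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    next-server 192.168.1.1;       # the tftp-hpa server
    filename "pxelinux.0";
}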
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes the setup works, but both sides go down when one node is
missing. Still, I set the two params below to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need 3 nodes at least to have quorum enabled. In 2 node setup you
> need to
2013 Dec 10
1
Error after crash of Virtual Machine during migration
Greetings,
Legend:
storage-gfs-3-prd - the first gluster.
storage-1-saas - the new gluster to which "the first gluster" had to be
migrated.
storage-gfs-4-prd - the second gluster (which had to be migrated later).
I've started the replace-brick command:
'gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared
storage-1-saas:/ydp/shared start'
During that Virtual
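For context, a replace-brick started this way is normally followed by status checks and, once migration finishes, a commit; a sketch reusing the names from the post:
gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared \
    storage-1-saas:/ydp/shared status
gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared \
    storage-1-saas:/ydp/shared commit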
2013 Sep 16
0
gluster replace
So I messed up when building a brick on a gluster 3.3.1 filesystem.
Instead of i=512 on the XFS filesystem I set i=256. I realized my
mistake after I had already rebalanced the volume. I wanted to remove
and replace that brick in order to rebuild it properly; it hadn't
failed yet, but I knew that it wasn't good to have i=256. So I attempted
to do:
gluster volume replace-brick
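For reference, the XFS inode size is fixed at mkfs time, so rebuilding the brick means re-creating the filesystem; a minimal sketch with a placeholder device and mount point (this wipes the brick):
umount /bricks/brick1
mkfs.xfs -f -i size=512 /dev/sdb1     # 512-byte inodes for the rebuilt brick
mount /dev/sdb1 /bricks/brick1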
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed Gluster cluster, 2 x (2 + 1),
on CentOS 7 with Gluster version 3.12.6 on the servers.
All machines have two network interfaces and are connected to two different networks:
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with LDAP, gluster version 3.13.1)
The Gluster cluster was created on the 10.10.0.0/16 net, gluster peer