similar to: warning spam in the logs after tiering experiment

Displaying 20 results from an estimated 1100 matches similar to: "warning spam in the logs after tiering experiment"

2017 Oct 18
0
warning spam in the logs after tiering experiment
Forgot to mention: Gluster version 3.10.6. On 18 October 2017 at 13:26, Alastair Neil <ajneil.tech at gmail.com> wrote: > A short while ago I experimented with tiering on one of my volumes. I > decided it was not working out, so I removed the tier. I now have spam in > the glusterd.log every 7 seconds: > > [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
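For context, the usual detach sequence in Gluster 3.10 and a follow-up check are sketched below; the volume name is a placeholder. If the tierd warnings persist after a clean detach, confirming that no tier daemon is still listed in volume status is a reasonable first step.
# gluster volume tier <vol> detach start
# gluster volume tier <vol> detach status
# gluster volume tier <vol> detach commit
# gluster volume status <vol>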
2017 Nov 03
1
Ignore failed connection messages during copying files with tiering
Hi all, we have created a GlusterFS cluster with tiers. The hot tier is a distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse volume. When copying millions of files to the cluster, we see these log messages: W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file or directory) W
2017 Nov 04
1
Fwd: Ignore failed connection messages during copying files with tiering
Hi, we have created a GlusterFS cluster with tiers. The hot tier is a distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse volume. When copying millions of files to the cluster, we see these log messages: W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file or directory) W [socket.c:3292:socket_connect]
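A quick way to check whether the tier daemon that the socket path refers to is actually running is sketched below; the volume name is a placeholder.
# gluster volume status <vol>
# ls -l /var/run/gluster/*.socket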
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. So, I can see quota turned *on* and would like you to check the quota settings and test the system behavior *with quota turned off*. Although the file size that failed migration was 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
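For reference, quota can be inspected and toggled per volume with the CLI below; the volume name is a placeholder, and listing current limits before disabling is a sensible precaution.
# gluster volume quota <vol> list
# gluster volume quota <vol> disable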
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response.
>> What are the high and low watermarks for the tier set at?
# gluster volume get <vol> cluster.watermark-hi
Option                  Value
------                  -----
cluster.watermark-hi    90
# gluster volume get <vol> cluster.watermark-low
Option
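The watermarks themselves are ordinary volume options, so they can also be adjusted with volume set for testing; a sketch follows, with the volume name as a placeholder and the values purely illustrative.
# gluster volume set <vol> cluster.watermark-hi 90
# gluster volume set <vol> cluster.watermark-low 75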
2017 Oct 22
1
gluster tiering errors
There are several "no space left on device" messages. I would first check that free disk space is available for the volume. On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote: > Herb, > What are the high and low watermarks for the tier set at? > > # gluster volume get <vol> cluster.watermark-hi > > # gluster volume get
2017 Oct 22
0
gluster tiering errors
Herb, What are the high and low watermarks for the tier set at? # gluster volume get <vol> cluster.watermark-hi # gluster volume get <vol> cluster.watermark-low What is the size of the file that failed to migrate, as per the following tierd log? [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
2017 Oct 19
3
gluster tiering errors
All, I am new to Gluster and have some questions/concerns about tiering errors that I see in the log files. OS: CentOS 7.3.1611; Gluster version: 3.10.5; Samba version: 4.6.2. I see the following (scrubbed): Node 1 /var/log/glusterfs/tier/<vol>/tierd.log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed
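For anyone following along, the tier daemon's view can be checked directly alongside the log quoted above; the volume name is a placeholder, and the log path is the one from this report.
# gluster volume tier <vol> status
# tail -f /var/log/glusterfs/tier/<vol>/tierd.log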
2011 Jul 08
1
Possible to bind to multiple addresses?
I am trying to run GlusterFS on only my internal interfaces. I have set up two bricks and have a replicated volume that is started. Everything works fine when I run with no transport.socket.bind-address defined in the /etc/glusterfs/glusterd.vol file, but when I add it I get: Transport endpoint is not connected. My configuration looks like this: volume management type mgmt/glusterd
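For reference, a glusterd.vol with the bind address added looks roughly like the sketch below; the IP is a placeholder and the surrounding options are the ones commonly found in the shipped file.
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.bind-address 10.0.0.1
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
end-volume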
2017 Jun 15
1
peer probe failures
Hi, I'm having a similar issue; were you able to solve it? Thanks. Hey all, I've got a strange problem going on here. I've installed glusterfs-server on Ubuntu 16.04: glusterfs-client/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic] glusterfs-common/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic] glusterfs-server/xenial,now 3.7.6-1ubuntu1 amd64 [installed] I can
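The basic membership checks are worth running on both nodes before digging deeper; a sketch, with the hostname as a placeholder.
# gluster peer probe <hostname>
# gluster peer status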
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey all, in a two-node GlusterFS setup with one node down, I can't use the second node to mount the volume. I understand this is expected behaviour? Is there any way to allow the secondary node to function and then replicate what changed to the first (primary) node when it's back online? Or should I just go for a third node to allow for this? Also, how safe is it to set the following to none?
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi, you need at least 3 nodes to have quorum enabled. In a 2-node setup you need to disable quorum so that you can still use the volume when one of the nodes goes down. On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote: > Hey All, > > In a two node glusterfs setup, with one node down, can't use the second > node to mount the volume. I understand this is
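The quorum options being referred to are per-volume settings; a sketch follows, assuming the volume is named gv01 as in the subject. Note that setting both to none trades availability for a higher split-brain risk.
# gluster volume set gv01 cluster.server-quorum-type none
# gluster volume set gv01 cluster.quorum-type none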
2017 Jun 30
2
Multi petabyte gluster
We are using 3.10 and have a 7 PB cluster. We decided against 16+3 because the rebuild times are bottlenecked by matrix operations, which scale as the square of the number of data stripes. There are some savings because of larger data chunks, but we ended up using 8+3 and heal times are about half of those with 16+3. -Alastair On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan at gmail.com>
2017 Jun 30
0
Multi petabyte gluster
Did you test healing by increasing disperse.shd-max-threads? What are your heal times per brick now? On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech at gmail.com> wrote: > We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as the > rebuild time are bottlenecked by matrix operations which scale as the square > of the number of data stripes. There are
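For concreteness, an 8+3 disperse layout and the heal-thread option discussed here look roughly like the sketch below; hostnames, brick paths and the thread count are placeholders, not recommendations.
# gluster volume create <vol> disperse-data 8 redundancy 3 host{1..11}:/bricks/b1
# gluster volume set <vol> disperse.shd-max-threads 4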
2009 Feb 28
4
possibly a stupid question, why can I not set sharenfs="sec=krb5, rw"?
x86, snv 108. I have a pool called home with around 5300 file systems. I can do: zfs set sharenfs=on home However, zfs set sharenfs="sec=krb5,rw" home complains: cannot set property for 'home': 'sharenfs' cannot be set to invalid options. I feel I must be overlooking something elementary. Thanks, Alastair
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Hello, it seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3 configuration. I upgraded the first server and then launched a reboot. Gluster is not starting; it seems that glusterd starts before the network layer is up. Some logs here: Thanks. [2017-10-04 15:33:00.506396] I [MSGID: 106143] [glusterd-pmap.c:277:pmap_registry_bind] 0-pmap: adding brick /opt/glusterfs/advdemo on port
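One commonly suggested workaround for this ordering problem is a systemd drop-in that delays glusterd until the network is actually online; a sketch, assuming a stock glusterd.service.
# cat /etc/systemd/system/glusterd.service.d/override.conf
[Unit]
After=network-online.target
Wants=network-online.target
# systemctl daemon-reload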
2017 Sep 07
2
GlusterFS as virtual machine storage
True, but working your way into that problem with replica 3 is a lot harder than with just replica 2 + arbiter. On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi Neil, docs mention two live nodes of replica 3 blaming each other and > refusing to do IO. > > https://gluster.readthedocs.io/en/latest/Administrator% >
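For reference, the arbiter variant being compared here is created with an explicit arbiter count; a sketch, with hostnames and brick paths as placeholders.
# gluster volume create <vol> replica 3 arbiter 1 host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb1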
2017 Oct 05
0
Glusterd not working with systemd in redhat 7
On Wed, Oct 4, 2017 at 9:26 PM, ismael mondiu <mondiu at hotmail.com> wrote: > Hello , > > it seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3 > configuration. I upgraded the first server and then launched a reboot. > > > Gluster is not starting. Seems that gluster starts before network layer. > > Some logs here: > > > Thanks >
2013 Nov 13
1
Disabling NFS causes E level errors in nfs.log (bug 976750)
Hello, according to bug 976750 (https://bugzilla.redhat.com/show_bug.cgi?id=976750), the problem with repeating error messages: [2013-11-13 17:16:11.888894] E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when NFS is disabled on all volumes was supposed to be solved in 3.4.1. We're using glusterfs-server-3.4.1-3.el6.x86_64
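The scenario in the bug is per-volume NFS being switched off; a sketch of the relevant commands, with the volume name as a placeholder (on 3.4 the reconfigured option shows up in volume info).
# gluster volume set <vol> nfs.disable on
# gluster volume info <vol>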
2017 Oct 05
2
Glusterd not working with systemd in redhat 7
Hello Atin, please find below the requested information:
[root at dvihcasc0r ~]# cat /var/lib/glusterd/vols/advdemo/bricks/*
hostname=dvihcasc0r
path=/opt/glusterfs/advdemo
real_path=/opt/glusterfs/advdemo
listen-port=49152
rdma.listen-port=0
decommissioned=0
brick-id=advdemo-client-0
mount_dir=/advdemo
snap-status=0
hostname=dvihcasc0s
path=/opt/glusterfs/advdemo