Displaying 20 results from an estimated 1000 matches similar to: "Ignore failed connection messages during copying files with tiering"
2017 Nov 04
1
Fwd: Ignore failed connection messages during copying files with tiering
Hi,
We created a GlusterFS cluster with tiering. The hot tier is a
distributed-replicated volume on SSDs. The cold tier is an n*(6+2) disperse
volume. When copying millions of files to the cluster, we see these logs:
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
or directory)
W [socket.c:3292:socket_connect]
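For anyone else hitting this: the socket in the warning belongs to the tier
daemon, so a reasonable first check is whether tierd is actually running for
the volume. A hedged sketch (volume name is a placeholder):

# gluster volume tier <vol> status
# ls -l /var/run/gluster/*.socket

If tierd is down, the socket file will be missing and the connection attempt
will keep failing, which would explain this warning.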
2017 Oct 18
2
warning spam in the logs after tiering experiment
a short while ago I experimented with tiering on one of my volumes. I
decided it was not working out, so I removed the tier. I now have spam in
the glusterd.log every 7 seconds:
[2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)
[2017-10-18
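In case it helps anyone seeing the same spam: after a tier is removed,
glusterd can apparently keep polling the old tierd socket. A harmless first
thing to try is restarting the management daemon on each node (systemd unit
name assumed):

# systemctl restart glusterd

If the warnings persist, leftover tier options on the volume may be worth a
look with 'gluster volume get <vol> all'.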
2017 Oct 18
0
warning spam in the logs after tiering experiment
forgot to mention Gluster version 3.10.6
On 18 October 2017 at 13:26, Alastair Neil <ajneil.tech at gmail.com> wrote:
> a short while ago I experimented with tiering on one of my volumes. I
> decided it was not working out, so I removed the tier. I now have spam in
> the glusterd.log every 7 seconds:
>
> [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
2017 Oct 27
0
gluster tiering errors
Herb,
I'm trying to weed out issues here.
So, I can see quota turned *on* and would like you to check the quota
settings and test the system behavior *with quota turned off*.
Although the file that failed migration was only 29K, I'm being a bit
paranoid while weeding out issues.
Are you still facing tiering errors?
I can see your response to Alex with the disk space consumption and
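For reference, quota can be toggled per volume while testing; a minimal
sketch, assuming the volume is named <vol>:

# gluster volume quota <vol> list
# gluster volume quota <vol> disable

and re-enabled afterwards with 'gluster volume quota <vol> enable'.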
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response..
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option                    Value
------                    -----
cluster.watermark-hi      90
# gluster volume get <vol> cluster.watermark-low
Option
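The watermarks are ordinary volume options, so they can be changed the same
way they are read; a hedged example (the values are illustrative, not
recommendations):

# gluster volume set <vol> cluster.watermark-hi 70
# gluster volume set <vol> cluster.watermark-low 50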
2017 Oct 22
1
gluster tiering errors
There are several messages "no space left on device". I would first check
that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks for the tier set at ?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
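A quick way to verify free space is to check the brick filesystems directly
on each node, and to ask gluster for its own view (brick path and volume
name are placeholders):

# df -h /path/to/brick
# gluster volume status <vol> detail

The 'detail' output reports free disk space and inodes per brick.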
2017 Oct 22
0
gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
failed for
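To answer the size question without waiting for another migration cycle, the
file named in the tierd log can be stat'ed directly on a cold-tier brick
(paths are placeholders):

# stat /bricks/cold/brick1/path/to/file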
2017 Oct 19
3
gluster tiering errors
All,
I am new to gluster and have some questions/concerns about tiering
errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed
2011 Jul 08
1
Possible to bind to multiple addresses?
I am trying to run GlusterFS only on my internal interfaces. I have
set up two bricks and have a replicated volume that is started.
Everything works fine when I run with no transport.socket.bind-address
defined in the /etc/glusterfs/glusterd.vol file, but when I add it I get:
Transport endpoint is not connected
My configuration looks like this:
volume management
    type mgmt/glusterd
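For comparison, a minimal glusterd.vol sketch with the bind address set (the
address is a placeholder; the remaining options mirror the stock file):

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.bind-address 10.0.0.1
end-volume

Note that once glusterd is bound to a single address, everything that talks
to it (the CLI, brick processes, self-heal daemons) must reach it on that
address, which is a common source of "Transport endpoint is not connected".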
2017 Jun 15
1
peer probe failures
Hi,
I'm having a similar issue; were you able to solve it?
Thanks.
Hey all,
I've got a strange problem going on here. I've installed glusterfs-server
on ubuntu 16.04:
glusterfs-client/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic]
glusterfs-common/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic]
glusterfs-server/xenial,now 3.7.6-1ubuntu1 amd64 [installed]
I can
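In case it helps with diagnosis, the peer state is worth comparing on both
ends before and after the probe:

# gluster peer status
# gluster pool list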
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node glusterfs setup with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Is there any way to allow the secondary node to keep functioning and then
replicate what changed to the first (primary) node when it's back online?
Or should I just go for a third node to allow for this?
Also, how safe is it to set the following to none?
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so as to still be able to use the volume when one of the
nodes goes down.
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, can't use the second
> node to mount the volume. I understand this is
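For completeness, the quorum knobs are ordinary volume options; a hedged
sketch for a 2-node setup (volume name is a placeholder, and note that
disabling quorum makes split-brain possible):

# gluster volume set <vol> cluster.server-quorum-type none
# gluster volume set <vol> cluster.quorum-type none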
2018 Jan 09
2
Blocking IO when hot tier promotion daemon runs
I've recently enabled an SSD-backed 2 TB hot tier on my 150 TB, 2-server /
3-bricks-per-server distributed-replicated volume.
I'm seeing IO get blocked across all client FUSE threads for 10 to 15
seconds while the promotion daemon runs. I see the 'glustertierpro' thread
jump to 99% CPU usage on both boxes when these delays occur, and they happen
every 25 minutes (my
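If the stalls coincide with migration, the tier throttle options may be
worth experimenting with; a hedged sketch (values are illustrative, not
recommendations):

# gluster volume set <vol> cluster.tier-promote-frequency 1500
# gluster volume set <vol> cluster.tier-max-mb 1000
# gluster volume set <vol> cluster.tier-max-files 5000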
2018 Feb 09
1
Tiering Volumes
Hello everyone.
I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the
tiers for each volume?
I will be adding 2 more HDDs to each server. I would then like to change
from a Replicate to a Distributed-Replicate volume. Not sure if that makes
a difference in the tiering setup.
[root at
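For the record, gluster's tiering does not pair two separate volumes; a hot
tier is attached to an existing (cold) volume as extra bricks. A hedged
sketch of the syntax (hostnames and brick paths are placeholders):

# gluster volume tier ColdTier attach replica 2 \
    server1:/nvme/brick1 server2:/nvme/brick1

so "ColdTier" would be the base volume and the NVMe bricks become its hot
tier, rather than "HotTier" existing as a separate volume.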
2007 Jun 22
1
Implicit storage tiering w/ ZFS
I'm curious if there has been any discussion of or work done toward implementing storage classing within zpools (this would be similar to the storage foundation QoSS feature).
I've searched the forum and inspected the documentation looking for a means to do this, and haven't found anything, so pardon the post if this is redundant/superfluous.
I would imagine this would
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Hello ,
it seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3 configuration. I upgraded the first server and then launched a reboot.
Gluster is not starting. It seems that gluster starts before the network layer.
Some logs here:
Thanks
[2017-10-04 15:33:00.506396] I [MSGID: 106143] [glusterd-pmap.c:277:pmap_registry_bind] 0-pmap: adding brick /opt/glusterfs/advdemo on port
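A common workaround is to order the unit after network-online; a hedged
sketch of a systemd drop-in (created with 'systemctl edit glusterd'):

[Unit]
Wants=network-online.target
After=network-online.target

followed by a reboot to verify the ordering actually changed the behavior.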
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
Hi,
Can you send the volume info and volume status output, and the tier logs?
I also need to know the size of the files that are being stored.
On Tue, Jan 9, 2018 at 9:51 PM, Tom Fite <tomfite at gmail.com> wrote:
> I've recently enabled an SSD backed 2 TB hot tier on my 150 TB 2 server / 3
> bricks per server distributed replicated volume.
>
> I'm seeing IO get blocked
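For anyone collecting the same data, the usual set would be (volume name is
a placeholder):

# gluster volume info <vol>
# gluster volume status <vol>
# tar czf tier-logs.tgz /var/log/glusterfs/tier/<vol>/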
2017 Oct 05
0
Glusterd not working with systemd in redhat 7
On Wed, Oct 4, 2017 at 9:26 PM, ismael mondiu <mondiu at hotmail.com> wrote:
> Hello ,
>
> it seems the problem still persists on 3.10.6. I have a 1 x (2 + 1) = 3
> configuration. I upgraded the first server and then launched a reboot.
>
>
> Gluster is not starting. It seems that gluster starts before the network layer.
>
> Some logs here:
>
>
> Thanks
>
2013 Nov 13
1
Disabling NFS causes E level errors in nfs.log (bug 976750)
Hello,
according to bug 976750
(https://bugzilla.redhat.com/show_bug.cgi?id=976750), the problem with the
repeating error message:
[2013-11-13 17:16:11.888894] E [socket.c:2788:socket_connect]
0-management: connection attempt failed (Connection refused)
when NFS is disabled on all volumes was supposed to be solved in
3.4.1. We're using glusterfs-server-3.4.1-3.el6.x86_64
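For context, NFS gets disabled per volume with (volume name is a
placeholder):

# gluster volume set <vol> nfs.disable on

and the bug is that glusterd then keeps retrying the connection to the
intentionally absent NFS process, which is what fills nfs.log with these
E-level messages.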
2017 Oct 05
2
Glusterd not working with systemd in redhat 7
Hello Atin,
Please find below the requested informations:
[root at dvihcasc0r ~]# cat /var/lib/glusterd/vols/advdemo/bricks/*
hostname=dvihcasc0r
path=/opt/glusterfs/advdemo
real_path=/opt/glusterfs/advdemo
listen-port=49152
rdma.listen-port=0
decommissioned=0
brick-id=advdemo-client-0
mount_dir=/advdemo
snap-status=0
hostname=dvihcasc0s
path=/opt/glusterfs/advdemo