Displaying 20 results from an estimated 53 matches for "afr_notifi".
2017 Oct 19, 3 replies: gluster tiering errors
All,
I am new to gluster and have some questions/concerns about tiering
errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1 /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed
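A useful first check here (a sketch; <vol> is a placeholder) is the tier daemon's own counters, which on 3.10 should report promoted/demoted file counts and failures per node:
# gluster volume tier <vol> status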
2017 Oct 22, 0 replies: gluster tiering errors
Herb,
What are the high and low watermarks set for the tier?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following
tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
failed for
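If the watermarks turn out to need adjusting, they are ordinary volume options; a sketch with illustrative values:
# gluster volume set <vol> cluster.watermark-hi 90
# gluster volume set <vol> cluster.watermark-low 75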
2017 Oct 22, 1 reply: gluster tiering errors
There are several "no space left on device" messages. I would first check
that free disk space is available for the volume.
On Oct 22, 2017 18:42, "Milind Changire" <mchangir at redhat.com> wrote:
> Herb,
> What are the high and low watermarks set for the tier?
>
> # gluster volume get <vol> cluster.watermark-hi
>
> # gluster volume get
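A quick way to confirm (brick paths are illustrative) is to check both block and inode usage on every brick, since "no space left on device" can mean either:
# df -h /path/to/brick
# df -i /path/to/brick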
2017 Oct 24, 2 replies: gluster tiering errors
Milind, thank you for the response.
>> What are the high and low watermarks set for the tier?
# gluster volume get <vol> cluster.watermark-hi
Option Value
------ -----
cluster.watermark-hi 90
# gluster volume get <vol> cluster.watermark-low
Option
2017 Oct 27, 0 replies: gluster tiering errors
Herb,
I'm trying to weed out issues here.
So, I can see quota turned *on* and would like you to check the quota
settings and test to see system behavior *if quota is turned off*.
Although the file that failed to migrate was only 29K, I'm being a bit
paranoid while weeding out issues.
Are you still facing tiering errors?
I can see your response to Alex with the disk space consumption and
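For the quota check, something like the following should show the current settings and turn quota off for the test (a sketch; <vol> is a placeholder):
# gluster volume quota <vol> list
# gluster volume quota <vol> disable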
2018 Apr 09, 2 replies: Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
2018 Apr 10, 0 replies: Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and are connected to
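For reference, the usual shape of those command lines is roughly this (hostnames, volume name and mount point are illustrative):
# gluster peer probe urd-gds-002
# mount -t glusterfs urd-gds-001:/<vol> /mnt/<vol>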
2012 Jun 22, 1 reply: Fedora 17 GlusterFS 3.3.0 problems
When I do an NFS mount and run ls, I get:
[root at ovirt share]# ls
ls: reading directory .: Too many levels of symbolic links
[root at ovirt share]# ls -fl
ls: reading directory .: Too many levels of symbolic links
total 3636
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096 Jun 21 19:29 ..
drwxr-xr-x 3 root root 16384 Jun 21 19:34 .
dr-xr-xr-x. 21 root root 4096
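One thing that may be worth ruling out (an assumption, not a confirmed cause) is the NFS protocol version, since Gluster's built-in NFS server speaks NFSv3 over TCP only; a sketch of an explicit mount with placeholder names:
# mount -t nfs -o vers=3,proto=tcp <server>:/share /mnt/share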
2018 Apr 10, 1 reply: Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
2018 Apr 10, 0 replies: Gluster cluster on two networks
Hi all!
I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer probe
2011 Aug 24, 1 reply: Input/output error
Hi, everyone.
It's nice meeting you.
I am poor at English....
I am writing because I'd like to update GlusterFS to 3.2.2-1, and I want
to change from a gluster mount to an NFS mount.
I installed GlusterFS 3.2.1 one week ago, replicated across 2 servers.
OS: CentOS 5.5 64bit
RPM: glusterfs-core-3.2.1-1
glusterfs-fuse-3.2.1-1
Command:
gluster volume create syncdata replica 2 transport tcp
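For the gluster-to-NFS change itself, only the client mount line differs; a sketch with placeholder names (Gluster's NFS server is NFSv3/TCP only):
# mount -t glusterfs <server>:/syncdata /mnt/syncdata
# mount -t nfs -o vers=3,proto=tcp <server>:/syncdata /mnt/syncdata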
2017 Jun 28, 2 replies: setting gfid on .trashcan/... failed - total outage
Hello,
recently we twice had a partial gluster outage followed by a total
outage of all four nodes. Looking into the gluster mailing list I found
a very similar case in
http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
but I'm not sure if this issue is fixed...
Even though this outage happened on glusterfs 3.7.18, which gets no more updates
since ~3.7.20, I would kindly ask
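Since the failures involve .trashcan, one data point worth collecting (an assumption on my part, not a known fix) is whether the trash feature is enabled and whether disabling it changes the behavior:
# gluster volume get <vol> features.trash
# gluster volume set <vol> features.trash off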
2012 Jan 04, 0 replies: FUSE init failed
Hi,
I'm having an issue using the GlusterFS native client.
After doing a mount, the filesystem appears mounted, but any operation
results in a
"Transport endpoint is not connected"
message.
gluster peer status and volume info don't complain.
I've copied the mount log below, which mentions an error at fuse_init.
The kernel is based on 2.6.15 and the FUSE API version is 7.3.
I'm using
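Given FUSE API version 7.3 on a 2.6.15-based kernel, the kernel side may simply be too old for the client; two generic checks to see what the kernel's fuse module reports (a sketch):
# modinfo fuse
# dmesg | grep -i fuse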
2012 Feb 22, 2 replies: "mismatching layouts" errors after expanding volume
Dear All-
There are a lot of errors of the following type in my client and NFS
logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I
[dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol:
atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk]
0-atmos-dht: mismatching
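After expanding a distribute volume, the usual follow-up (worth confirming for your version) is a fix-layout rebalance so directory layouts are recalculated to cover the new bricks; a sketch using the volume name from the log:
# gluster volume rebalance atmos fix-layout start
# gluster volume rebalance atmos status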
2017 Jun 29, 0 replies: setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> Hello,
>
> recently we twice had a partial gluster outage followed by a total
> outage of all four nodes. Looking into the gluster mailing list I found
> a very similar case in
> http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
If you are talking about a crash happening on bricks, were you
2017 Jun 29, 1 reply: setting gfid on .trashcan/... failed - total outage
Hello Anoop,
thank you for your reply.
Answers inline...
best regards
Dietmar
On 29.06.2017 10:48, Anoop C S wrote:
> On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
>> Hello,
>>
>> recently we twice had a partial gluster outage followed by a total
>> outage of all four nodes. Looking into the gluster mailing list I found
>> a very similar case
2013 Dec 03, 3 replies: Self Heal Issue GlusterFS 3.3.1
Hi,
I'm running GlusterFS 3.3.1 on CentOS 6.4.
# gluster volume status
Status of volume: glustervol
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick 24009 Y 20031
Brick KWTOCUATGS002:/mnt/cloudbrick
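On 3.3.x the self-heal state is usually inspected with the heal subcommands; a sketch using the volume name above:
# gluster volume heal glustervol info
# gluster volume heal glustervol info split-brain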
2018 Jan 23, 2 replies: Understanding client logs
Hi all,
I have a problem pinpointing an error: users of
my system experience processes that crash.
The thing that has changed since the crashes started
is that I added a gluster cluster.
Of course, the users start to blame my gluster cluster.
I started looking at logs, starting from the client side.
I just need help understanding how to read them the right way.
I can see that every ten
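As a starting point for reading client logs: each line carries a severity letter after the timestamp (I for info, W for warning, E for error), so filtering for errors narrows things down quickly; a sketch, with an illustrative log path:
# grep ' E ' /var/log/glusterfs/<mountpoint>.log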
2011 Jun 06, 2 replies: Gluster 3.2.0 and ucarp not working
Hello everybody.
I have a problem setting up gluster failover functionality. Based on
the manual I set up ucarp, which is working well (tested with ping/ssh,
etc.).
But when I use the virtual address for the gluster volume mount and I turn
off one of the nodes, the machine/gluster will freeze until the node is back online.
My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In
the gluster log I can see:
[2011-06-06
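Two things that may be relevant here (assumptions, not confirmed fixes): the native client accepts a backup volfile server at mount time, and the length of a freeze on node loss is governed by network.ping-timeout (42 seconds by default); a sketch with placeholder names:
# mount -t glusterfs -o backupvolfile-server=<node2> <node1>:/<vol> /mnt/<vol>
# gluster volume set <vol> network.ping-timeout 10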
2011 Feb 09, 0 replies: Removing two bricks
This has been tested using the most recent build of 3.1.2 (built Jan 18 2011 11:19:54)
System setup:
Volume Name: brick
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: linguest2:/data/exp
Brick2: linguest3:/data/exp
Brick3: linguest4:/data/exp
Brick4: linguest5:/data/exp
This scenario is to remove both linguest4 and linguest5 from the
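For a 2 x 2 distributed-replicate volume, removing one replica pair means naming both of its bricks in a single remove-brick call; a sketch using the volume above (note that on 3.1.x remove-brick takes effect immediately):
# gluster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp
On newer releases, appending start, then status and commit, migrates data off the bricks before they are removed.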