Displaying 20 results from an estimated 4000 matches similar to: "How to deal with FAILURES count in geo rep"
2017 Jul 28
0
How to deal with FAILURES count in geo rep
Can anyone tell me how to find out what is going wrong here? In the meantime I have a FAILURES count of 272 and I can't find anything in the GlusterFS documentation on how to troubleshoot the FAILURES count in geo-replication.
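For anyone searching for the same thing, a hedged starting point (standard geo-replication CLI; the placeholders and log path are assumptions, not details from this thread): the detailed status output shows the FAILURES column per worker, and the errors behind it are logged on the master nodes.
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail
less /var/log/glusterfs/geo-replication/<mastervol>/*.log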
Thank you.
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is run (without any volume name)?
gluster volume geo-replication status
Volume stop with force should work even if a Geo-replication session exists.
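For example, using the volume name that appears elsewhere in this thread as an illustrative value:
gluster volume stop myvolume force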
From the error it looks like node "arbiternode.domain.tld" in the Master cluster is down or not reachable.
regards
Aravinda VK
2017 Aug 07
0
How to delete geo-replication session?
Hi,
I would really like to get rid of this geo-replication session as I am stuck with it right now. For example I can't even stop my volume, as it complains about that geo-replication session...
Can someone let me know how I can delete it?
Thanks
> -------- Original Message --------
> Subject: How to delete geo-replication session?
> Local Time: August 1, 2017 12:15 PM
> UTC Time: August
2017 Aug 08
0
How to delete geo-replication session?
When I run "gluster volume geo-replication status" I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details have a quick look at my previous post here:
2017 Aug 08
1
How to delete geo-replication session?
Sorry I missed your previous mail.
Please perform the following steps once a new node is added:
- Run the gsec create command again:
gluster system:: execute gsec_create
- Run the Geo-rep create command with force, then start with force:
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start force
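Putting the steps together with the host and volume names used elsewhere in this thread (illustrative values only, not verified against this setup):
gluster system:: execute gsec_create
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo create push-pem force
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo start force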
2017 Aug 01
3
How to delete geo-replication session?
Hi,
I would like to delete a geo-replication session on my GlusterFS 3.8.11 replica 2 volume in order to re-create it. Unfortunately the "delete" command does not work, as you can see below:
$ sudo gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete
Staging failed on arbiternode.domain.tld. Error: Geo-replication session between myvolume and
2017 Jul 29
2
Not possible to stop geo-rep after adding arbiter to replica 2
Hello
To my two node replica volume I have added an arbiter node for safety purposes. On that volume I also have geo-replication running and would like to stop it: its status is "Faulty" and it keeps trying over and over to sync without success. I am using GlusterFS 3.8.11.
So in order to stop geo-rep I use:
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop
but it
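The snippet is cut off here; as reported later in this thread, appending force to the stop command works around the Faulty state:
gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop force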
2017 Jul 29
0
Not possible to stop geo-rep after adding arbiter to replica 2
Adding Rahul and Kothresh, who are SMEs on geo-replication.
Thanks & Regards
Karan Sandha
2017 Jul 29
1
Not possible to stop geo-rep after adding arbiter to replica 2
I managed to force-stop geo-replication using the "force" parameter after "stop", but there are still other issues related to the fact that my geo-replication setup was created before I added the additional arbiter node to my replica.
For example, when I want to stop my volume I simply can't, and I get the following error:
volume stop: myvolume: failed: Staging
2017 Aug 03
2
Quotas not working after adding arbiter brick to replica 2
I tried to manually re-create my quotas, but not even that works now. Running the "limit-usage" command as shown below returns success:
$ sudo gluster volume quota myvolume limit-usage /userdirectory 50GB
volume quota : success
but when I list the quotas using "list" nothing appears.
What can I do to fix that issue with the quotas?
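One check worth trying (standard quota CLI, not something suggested in this thread): list the path directly to confirm whether the limit was actually recorded:
gluster volume quota myvolume list /userdirectory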
2017 Aug 02
2
Quotas not working after adding arbiter brick to replica 2
Mabi,
We have fixed a couple of issues in the quota list path.
Could you also please attach the quota.conf file (/var/lib/glusterd/vols/patchy/quota.conf)?
(Ideally, the first few bytes would be ASCII characters followed by 17 bytes per directory on which a quota limit is set.)
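To inspect that structure, a hex dump shows the short header followed by the fixed-size per-directory entries (a sketch; substitute your real volume name in the path):
xxd /var/lib/glusterd/vols/myvolume/quota.conf | head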
Regards,
Sanoj
2017 Aug 04
1
Quotas not working after adding arbiter brick to replica 2
Thank you very much Sanoj. I ran your script once and it worked, and I now have quotas again...
Question: do you know in which release this issue will be fixed?
2017 Aug 04
0
Quotas not working after adding arbiter brick to replica 2
Hi mabi,
This is likely an issue where the last gfid entry in the quota.conf file is stale (because the directory was deleted while a quota limit was still set on it)
(https://review.gluster.org/#/c/16507/)
To fix the issue, we need to remove the last entry (the last 17 or 16 bytes, depending on the quota version) in the file.
Please use the workaround below until the next upgrade.
you only need to
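The rest of the instructions are cut off in this snippet; a minimal sketch of the described workaround, assuming the 17-byte entry format (use 16 for older quota versions), with a backup taken first:
cp /var/lib/glusterd/vols/myvolume/quota.conf /var/lib/glusterd/vols/myvolume/quota.conf.bak
truncate -s -17 /var/lib/glusterd/vols/myvolume/quota.conf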
2017 Aug 01
2
Quotas not working after adding arbiter brick to replica 2
Hello,
As you might have read in my previous post on the mailing list, I have added an arbiter node to my GlusterFS 3.8.11 replica 2 volume. After some healing issues, which were fixed with Ravi's help, I just noticed that my quotas are all gone.
When I run the following command:
gluster volume quota myvolume list
There is no output...
In the /var/log/glusterfs/quotad.log I can see the
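The log excerpt is cut off here. One hedged first check (not from this thread): confirm the quota daemon is actually running, since volume status lists it when quota is enabled:
gluster volume status myvolume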
2017 Aug 02
0
Quotas not working after adding arbiter brick to replica 2
Hi Sanoj,
I copied over the quota.conf file from the affected volume (node 1) and opened it with a hex editor, but I cannot recognize anything really except for the first few header/version bytes. I have attached it to this mail (compressed with bzip2) as requested.
Should I recreate them manually? There were around 10 of them. Or is there hope of recovering these quotas?
Regards,
M.
2017 Aug 22
3
self-heal not working
Thanks for the additional hints. I have the following 2 questions first:
- In order to launch the index heal, is the following command correct (see the reference invocations below):
gluster volume heal myvolume
- If I run a "volume start force", will it cause any short disruptions for my clients which mount the volume through FUSE? If yes, how long? This is a production system, that's why I am asking.
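For reference, the common heal invocations (standard CLI, shown with this thread's volume name):
gluster volume heal myvolume          (index heal)
gluster volume heal myvolume full     (full heal)
gluster volume heal myvolume info     (list entries still needing heal)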
2017 Aug 01
0
Quotas not working after adding arbiter brick to replica 2
I also just noticed quite a few of the following warning messages in the quotad.log file:
[2017-08-01 07:59:27.834202] W [MSGID: 108027] [afr-common.c:2496:afr_discover_done] 0-myvolume-replicate-0: no read subvols for (null)
2017 Aug 22
0
self-heal not working
On 08/22/2017 02:30 PM, mabi wrote:
> Thanks for the additional hints, I have the following 2 questions first:
>
> - In order to launch the index heal is the following command correct:
> gluster volume heal myvolume
>
Yes
> - If I run a "volume start force" will it have any short disruptions
> on my clients which mount the volume through FUSE? If yes, how long?
2017 Aug 21
2
self-heal not working
Sure, it doesn't look like a split-brain based on the output:
Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
Brick node2.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
Brick node3.domain.tld:/srv/glusterfs/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
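For context, output like the above comes from the standard split-brain check:
gluster volume heal myvolume info split-brain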
2017 Aug 23
2
self-heal not working
I just saw the following bug which was fixed in 3.8.15:
https://bugzilla.redhat.com/show_bug.cgi?id=1471613
Is it possible that the problem I described in this post is related to that bug?