
Displaying 20 results from an estimated 7000 matches similar to: "How to delete geo-replication session?"

2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is run (without any volume name)? "gluster volume geo-replication status". Volume stop force should work even if a Geo-replication session exists. From the error it looks like node "arbiternode.domain.tld" in the Master cluster is down or not reachable. Regards, Aravinda VK On 08/07/2017 10:01 PM, mabi wrote: > Hi, >
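A minimal sketch of the two commands referred to above, assuming the master volume is called myvolume (the actual name is not given in this excerpt):

    # List every geo-replication session known to this cluster
    gluster volume geo-replication status

    # Stopping the volume should work even while a session exists
    gluster volume stop myvolume force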
2017 Aug 07
0
How to delete geo-replication session?
Hi, I would really like to get rid of this geo-replication session as I am stuck with it right now. For example I can't even stop my volume as it complains about that geo-replication... Can someone let me know how I can delete it? Thanks > -------- Original Message -------- > Subject: How to delete geo-replication session? > Local Time: August 1, 2017 12:15 PM > UTC Time: August
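For reference, a sketch of the general form of the removal commands with placeholder names (the later messages in this list quote the concrete invocation); a session has to be stopped before it can be deleted:

    # Stop the session first, forcing it if it is Faulty
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop force

    # Then delete the session
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> delete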
2017 Aug 08
0
How to delete geo-replication session?
When I run the "gluster volume geo-replication status" command I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details have a quick look at my previous post here:
2017 Aug 08
1
How to delete geo-replication session?
Sorry I missed your previous mail. Please perform the following steps once a new node is added:
- Run the gsec create command again: gluster system:: execute gsec_create
- Run the Geo-rep create command with force and then run start force: gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force gluster volume geo-replication <mastervol>
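The excerpt is cut off mid-command; a sketch of the full sequence it describes, using the same placeholders:

    # Regenerate the common secret pem so the newly added node is included
    gluster system:: execute gsec_create

    # Re-create the session (re-distributing the keys), then start it again
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start force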
2017 Jul 29
2
Not possible to stop geo-rep after adding arbiter to replica 2
Hello, To my two-node replica volume I have added an arbiter node for safety purposes. On that volume I also have geo-replication running and would like to stop it, as its status is "Faulty" and it keeps trying over and over to sync without success. I am using GlusterFS 3.8.11. So in order to stop geo-rep I use: gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop but it
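The stop command quoted above, plus the force variant that the follow-up message below reports as the workaround:

    # Normal stop fails while the session is Faulty
    gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop

    # Forcing the stop is what eventually worked
    gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop force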
2017 Jul 29
1
Not possible to stop geo-rep after adding arbiter to replica 2
I managed to force stopping geo-replication using the "force" parameter after the "stop", but there are still other issues related to the fact that my geo-replication setup was created before I added the additional arbiter node to my replica. For example, when I want to stop my volume I simply can't and I get the following error: volume stop: myvolume: failed: Staging
2017 Jul 29
0
Not possible to stop geo-rep after adding arbiter to replica 2
Adding Rahul and Kothresh, who are SMEs on geo-replication. Thanks & Regards Karan Sandha On Sat, Jul 29, 2017 at 3:37 PM, mabi <mabi at protonmail.ch> wrote: > Hello > > To my two node replica volume I have added an arbiter node for safety > purpose. On that volume I also have geo replication running and would like > to stop it as its status is "Faulty" and it keeps
2017 Aug 16
2
Manually delete .glusterfs/changelogs directory ?
Hello, I just deleted (permanently) my geo-replication session using the following command: gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete and noticed that the .glusterfs/changelogs directory on my volume still exists. Is it safe to delete the whole directory myself with "rm -rf .glusterfs/changelogs"? As far as I understand the CHANGELOG.* files are only needed
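A cautious sketch of the check-then-remove sequence being asked about, assuming the volume is myvolume and its brick lives at /data/myvolume/brick (paths borrowed from the other threads in this list):

    # Verify that the changelog feature is no longer enabled on the volume
    gluster volume get myvolume changelog.changelog

    # Only if it is off and no geo-replication session remains:
    rm -rf /data/myvolume/brick/.glusterfs/changelogs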
2017 Jun 30
1
How to deal with FAILURES count in geo rep
Hello, I have a replica 2 with a remote slave node for geo-replication (GlusterFS 3.8.11 on Debian 8) and saw for the first time a non-zero number in the FAILURES column when running: gluster volume geo-replication myvolume remotehost::remotevol status detail Right now the number under the FAILURES column is 32 and I have a few questions regarding how to deal with that: - first, what does 32 mean? is
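The status command from the excerpt, plus a pointer to where more detail about the failed entries usually ends up (the log path is the default one and an assumption here):

    # Per-session counters, including the FAILURES column
    gluster volume geo-replication myvolume remotehost::remotevol status detail

    # The worker logs normally explain which entries failed to sync
    less /var/log/glusterfs/geo-replication/myvolume/*.log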
2017 Aug 30
0
Manually delete .glusterfs/changelogs directory ?
Hi, has anyone any advice to give about my question below? Thanks! > -------- Original Message -------- > Subject: Manually delete .glusterfs/changelogs directory ? > Local Time: August 16, 2017 5:59 PM > UTC Time: August 16, 2017 3:59 PM > From: mabi at protonmail.ch > To: Gluster Users <gluster-users at gluster.org> > > Hello, > > I just deleted (permanently)
2017 Aug 31
2
Manually delete .glusterfs/changelogs directory ?
Hi Mabi, If you will not use that geo-replication volume session again, I believe it is safe to manually delete the files in the brick directory using rm -rf. However, the gluster documentation specifies that if the session is to be permanently deleted, this is the command to use: gluster volume geo-replication gv1 snode1::gv2 delete reset-sync-time
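A sketch of the difference between the two delete forms, reusing the example names from the excerpt above:

    # Keeps the sync-time markers, so a re-created session resumes where it left off
    gluster volume geo-replication gv1 snode1::gv2 delete

    # Also resets the sync-time markers, so a future session starts a full resync
    gluster volume geo-replication gv1 snode1::gv2 delete reset-sync-time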
2017 Aug 01
2
Quotas not working after adding arbiter brick to replica 2
Hello, As you might have read in my previous post on the mailing list, I have added an arbiter node to my GlusterFS 3.8.11 replica 2 volume. After some healing issues, which could be fixed with the help of Ravi, I just noticed that my quotas are all gone. When I run the following command: gluster volume quota myvolume list There is no output... In the /var/log/glusterfs/quotad.log I can see the
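A short sketch of the checks being described, with the command name corrected (the binary is gluster, not glusterfs) and the log path taken from the post:

    # Should print the configured limits; empty output is the symptom described
    gluster volume quota myvolume list

    # The quota daemon shows up in the volume status, and its log is the one quoted above
    gluster volume status myvolume
    tail /var/log/glusterfs/quotad.log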
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:33 PM, mabi wrote: > Now I understand what you mean with the "-samefile" parameter of > "find". As requested I have now run the following command on all 3 > nodes with the output of all 3 nodes below: > > sudo find /data/myvolume/brick -samefile > /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -ls > >
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean with the "-samefile" parameter of "find". As requested I have now run the following command on all 3 nodes with the output of all 3 nodes below: sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls node1: 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43
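For context, a sketch of the command from the excerpt together with a stat call that prints the hard-link count and inode number of the gfid file in question:

    # Link count, inode number and name of the gfid entry
    stat -c '%h %i %n' /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397

    # All paths on the brick that are hard links to that same file
    sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls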
2017 Aug 06
1
State: Peer Rejected (Connected)
Hi Ji-Hyeon, Thanks to your help I could find out the problematic file. This would be the quota file of my volume: it has a different checksum on node1, whereas node2 and arbiternode have the same checksum. This is expected, as I had issues with my quota file and had to fix it manually with a script (more details on this mailing list in a previous post), and I only did that on node1. So what I now
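A hedged sketch of how such a per-node comparison can be done, assuming the default glusterd layout and a volume called myvolume (the path convention is quoted in another thread below):

    # Run on every node and compare; the node whose sum differs is the one to fix
    md5sum /var/lib/glusterd/vols/myvolume/quota.conf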
2017 Aug 02
2
Quotas not working after adding arbiter brick to replica 2
Mabi, We have fixed a couple of issues in the quota list path. Could you also please attach the quota.conf file (/var/lib/glusterd/vols/patchy/quota.conf)? (Ideally, the first few bytes would be ASCII characters followed by 17 bytes per directory on which a quota limit is set.) Regards, Sanoj On Tue, Aug 1, 2017 at 1:36 PM, mabi <mabi at protonmail.ch> wrote: > I also just noticed quite
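A small sketch of how the file layout described in the parentheses can be inspected, assuming the volume is called myvolume rather than the placeholder patchy:

    # Hex dump: a short ASCII header followed by 17-byte records, one per limited directory
    od -A d -t x1 /var/lib/glusterd/vols/myvolume/quota.conf | head -n 20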
2017 Aug 01
0
Quotas not working after adding arbiter brick to replica 2
I also just noticed quite a few of the following warning messages in the quotad.log log file: [2017-08-01 07:59:27.834202] W [MSGID: 108027] [afr-common.c:2496:afr_discover_done] 0-myvolume-replicate-0: no read subvols for (null) > -------- Original Message -------- > Subject: [Gluster-users] Quotas not working after adding arbiter brick to replica 2 > Local Time: August 1, 2017 8:49 AM
2017 Aug 06
2
State: Peer Rejected (Connected)
Hi, I have a 3-node replica (including arbiter) volume with GlusterFS 3.8.11 and this night one of my nodes (node1) hit an out-of-memory condition for some unknown reason, so the Linux OOM killer killed the glusterd and glusterfs processes. I restarted the glusterd process but now that node is in "Peer Rejected" state as seen from the other nodes, and from itself it rejects the two other nodes
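A sketch of the first checks usually done in this situation on Debian 8; none of these commands are quoted in the excerpt itself:

    # Confirm the OOM kill in the kernel log
    dmesg | grep -i 'out of memory'

    # Restart the management daemon on the affected node and re-check the peers
    systemctl restart glusterd
    gluster peer status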
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly resume my current situation: on node2 I have found the following xattrop/indices file, which matches the GFID of the "heal info" command (below is the output of "ls -lai"): 2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 As you can see this file has inode number 2798404, so I ran
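The excerpt is cut off after "so I ran" and the actual command is not shown; a typical way to chase the inode number it mentions would be a find by inode, sketched here purely as an assumption:

    # List every entry on the brick that shares inode 2798404 (i.e. the other hard links, if any)
    sudo find /data/myvolume/brick -inum 2798404 -ls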
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote: > To quickly resume my current situation: > > on node2 I have found the following xattrop/indices file which > matches the GFID of the "heal info" command (below is the output of > "ls -lai"): > > 2798404 ---------- 2 root root 0 Apr 28 22:51 >