search for: slavehosts

Displaying 10 results from an estimated 10 matches for "slavehosts".

2017 Aug 08
1
How to delete geo-replication session?
Sorry I missed your previous mail. Please perform the following steps once a new node is added: - Run gsec create command again: gluster system:: execute gsec_create - Run Geo-rep create command with force and run start force: gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force gluster volume geo-replication <mastervol>
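A minimal sketch of that sequence, using placeholder names (master volume mastervol, slave volume slavevol, slave host slave1) that are not taken from the original mail: regenerate the common ssh keys, recreate the session so the new node's key is pushed to the slave, then start with force so the new node joins:
# gluster system:: execute gsec_create
# gluster volume geo-replication mastervol slave1::slavevol create push-pem force
# gluster volume geo-replication mastervol slave1::slavevol start force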
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, When S3 is added to the master volume from a new node, the following commands should be run to generate and distribute ssh keys: 1. Generate ssh keys from the new node: #gluster system:: execute gsec_create 2. Push those ssh keys of the new node to the slave: #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force 3. Stop and start geo-rep. But note that
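Step 3 above ("Stop and start geo-rep"), sketched with the same kind of placeholder names (mastervol, slavevol, slave1), none of which appear in the original mail:
# gluster volume geo-replication mastervol slave1::slavevol stop
# gluster volume geo-replication mastervol slave1::slavevol start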
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 gluster 3.12 volume between S1 and S2 (1 brick per node) geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, with S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then a "remove-brick replica 2 S1". Now I have again a replica 2 volume between S3 and S2
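The brick replacement described here would look roughly like the following, assuming a hypothetical volume name vol0 and brick path /bricks/brick1 (neither is given in the original mail):
# gluster volume add-brick vol0 replica 3 S3:/bricks/brick1
# gluster volume remove-brick vol0 replica 2 S1:/bricks/brick1 force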
2017 Aug 08
0
How to delete geo-replication session?
When I run the "gluster volume geo-replication status" I see my geo replication session correctly including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode as I have added it later after setting up geo-replication. For more details have a quick look at my previous post here:
2018 Feb 06
0
geo-replication command rsync returned with 3
Hi, As a quick workaround for geo-replication to work. Please configure the following option. gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true The above option will not do the lazy umount and as a result, all the master and slave volume mounts maintained by geo-replication can be accessed by others. It's also visible in df output.
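The suggested workaround, written out with placeholder names (mastervol, slavevol, slave1) that are not from the original mail; df then shows the auxiliary mounts that geo-replication keeps for the master and slave volumes:
# gluster volume geo-replication mastervol slave1::slavevol config access_mount true
# df -h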
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is run (without any volume name)? gluster volume geo-replication status Volume stop force should work even if a Geo-replication session exists. From the error it looks like node "arbiternode.domain.tld" in the Master cluster is down or not reachable. regards Aravinda VK On 08/07/2017 10:01 PM, mabi wrote: > Hi, >
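For reference, the two commands being discussed, with a placeholder master volume name mastervol (not from the original mail):
# gluster volume geo-replication status
# gluster volume stop mastervol force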
2018 Feb 05
2
geo-replication command rsync returned with 3
On 02/05/2018 01:33 PM, Florian Weimer wrote: > Do you have strace output going further back, at least to the preceding > getcwd call? It would be interesting to see which path the kernel > reports, and if it starts with "(unreachable)". I got the strace output now, but it is very difficult to read (chdir in a multi-threaded process?). My current inclination is to blame
2018 Apr 23
0
Geo-replication faulty
...x (2 + 1) = 3 After checking logs I see that the master node has the following error: OSError: Permission denied Looking at the slave I have the following error: remote operation failed. Path: <gfid:7c6a232d-c74c-40d7-b6bd-c092fc1169f7>/anvil [Permission denied] I restarted glusterd on all slavehosts. After this I got new errors. Master node: RepceClient: call failed on peer call=26487:140016890697536:1524473494.25 method=entry_ops error=OSError glusterfs session went down error=ENOTCONN Client node: Found anomalies in (null) (gfid = 982d5d7d-2a53-4b21-8ad7-d658810d554c)...
2018 Jan 22
1
geo-replication initial setup with existing data
2018 Feb 07
1
geo-replication command rsync returned with 3
Hi, Kotresh's workaround works for me. But before I tried it, I created some strace logs for Florian. Setup: 2 VMs (192.168.222.120 master, 192.168.222.121 slave), both with a volume named vol, with Ubuntu 16.04.3, glusterfs 3.13.2, rsync 3.1.1. Best regards, Tino root at master:~# cat /usr/bin/rsync #!/bin/bash strace -o /tmp/rsync.trace -ff /usr/bin/rsynco "$@" One of the traces
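A sketch of that wrapper approach: move the real rsync binary aside (the name rsync.orig below is an assumption; the original wrapper calls /usr/bin/rsynco) and replace /usr/bin/rsync with a script that runs it under strace:
# mv /usr/bin/rsync /usr/bin/rsync.orig
# cat > /usr/bin/rsync <<'EOF'
#!/bin/bash
# trace this rsync invocation and all its children into /tmp/rsync.trace.<pid>
exec strace -o /tmp/rsync.trace -ff /usr/bin/rsync.orig "$@"
EOF
# chmod +x /usr/bin/rsync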