search for: slavevol

Displaying 9 results from an estimated 9 matches for "slavevol".

2017 Oct 05
0
Inconsistent slave status output
...LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------
foo-gluster-srv1  gv0  /var/mnt/gluster/brick2  root  ssh://foo-gluster-srv3::slavevol  foo-gluster-srv3  Active   Changelog Crawl  2017-10-04 11:04:27
foo-gluster-srv2  gv0  /var/mnt/gluster/brick   root  ssh://foo-gluster-srv3::slavevol  foo-gluster-srv3  Passive  N/A              N/A
foo-gluster-srv1  gv0  /var/mnt/gluster/brick2...
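The snippet above looks like per-brick geo-replication status output; a minimal sketch of the command that would produce it, reusing the volume (gv0) and slave (foo-gluster-srv3::slavevol) names visible in the snippet:

  # Show per-brick session state: Active/Passive, crawl status, last-synced time
  gluster volume geo-replication gv0 foo-gluster-srv3::slavevol status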
2017 Aug 08
1
How to delete geo-replication session?
...ssed your previous mail. Please perform the following steps once a new node is added:
- Run the gsec create command again: gluster system:: execute gsec_create
- Run the Geo-rep create command with force, then start force:
  gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
  gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start force
With these steps you will be able to stop/delete the Geo-rep session. I will add these steps to the documentation page (http://gluster.readthedocs.io/en/latest/Adminis...
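For reference, a minimal sketch of the stop/delete step the reply alludes to, keeping <mastervol>, <slavehost> and <slavevol> as placeholders since the real names are not in the snippet:

  # Stop the session first (force helps if some nodes are unreachable)
  gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop force
  # Then remove the session definition
  gluster volume geo-replication <mastervol> <slavehost>::<slavevol> delete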
2018 Feb 07
0
add geo-replication "passive" node after node replacement
...ded to master volume from the new node, the following commands should be run to generate and distribute ssh keys:
1. Generate ssh keys from the new node: #gluster system:: execute gsec_create
2. Push the ssh keys of the new node to the slave: #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force
3. Stop and start geo-rep
But note that while removing a brick and adding a brick, you should make sure the data from the brick being removed is synced to the slave. Thanks, Kotresh HR On Wed, Feb 7, 2018 at 4:21 PM, Stefano Bagnara <lists at bago.org> wrote: >...
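Step 3 ("Stop and start geo-rep") is not spelled out in the snippet; a minimal sketch using the same placeholders:

  # Restart the session so the new node's keys and workers are picked up
  gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
  gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start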
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 Gluster 3.12 volume between S1 and S2 (1 brick per node), geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, with S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then a "remove-brick replica 2 S1". Now I again have a replica 2 volume between S3 and S2
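A minimal sketch of the brick replacement the poster describes; the volume name vol0 and brick path /data/brick are hypothetical, not taken from the snippet:

  # Add the new node's brick, raising the replica count to 3
  gluster volume add-brick vol0 replica 3 S3:/data/brick
  # Once healing is complete, drop the old node's brick, back to replica 2
  gluster volume remove-brick vol0 replica 2 S1:/data/brick force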
2017 Aug 08
0
How to delete geo-replication session?
When I run the "gluster volume geo-replication status" command I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details have a quick look at my previous post here:
2018 Feb 06
0
geo-replication command rsync returned with 3
Hi, As a quick workaround for geo-replication to work, please configure the following option:
gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true
The above option will not do the lazy umount, and as a result all the master and slave volume mounts maintained by geo-replication can be accessed by others. They are also visible in df output. There might be cases where the mount points do not get cleaned up when the worker...
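A minimal sketch of the workaround with concrete names filled in; gv0, slavehost and slavevol here are hypothetical, not taken from the snippet:

  # Enable the workaround for an existing session
  gluster volume geo-replication gv0 slavehost::slavevol config access_mount true
  # List the session's current configuration to confirm the change
  gluster volume geo-replication gv0 slavehost::slavevol config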
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is run (without any volume name)?
gluster volume geo-replication status
Volume stop force should work even if a Geo-replication session exists. From the error it looks like node "arbiternode.domain.tld" in the Master cluster is down or not reachable. regards Aravinda VK On 08/07/2017 10:01 PM, mabi wrote: > Hi, >
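"Volume stop force" presumably refers to stopping the master volume itself; a sketch under that assumption, with gv0 as a hypothetical master volume name:

  # List every geo-replication session on the cluster, regardless of volume
  gluster volume geo-replication status
  # Stop the master volume even though a geo-rep session still exists
  gluster volume stop gv0 force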
2018 Feb 05
2
geo-replication command rsync returned with 3
On 02/05/2018 01:33 PM, Florian Weimer wrote: > Do you have strace output going further back, at least to the preceding > getcwd call? It would be interesting to see which path the kernel > reports, and if it starts with "(unreachable)". I got the strace output now, but it is very difficult to read (chdir in a multi-threaded process?). My current inclination is to blame
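One way to get a trace that stays readable despite chdir calls from multiple threads is to split the output per process; a minimal sketch, assuming the rsync process spawned by geo-replication can be attached to (the PID and output path are hypothetical):

  # Follow children and write one trace file per PID, limited to the
  # syscalls relevant to the "(unreachable)" working-directory question
  strace -f -ff -o /tmp/rsync-trace -e trace=chdir,getcwd -p <rsync-pid>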
2018 Feb 07
1
geo-replication command rsync returned with 3
..."Gluster Users" <gluster-users at gluster.org> Subject: Re: [Gluster-users] geo-replication command rsync returned with 3 Hi, As a quick workaround for geo-replication to work, please configure the following option:
gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true
The above option will not do the lazy umount, and as a result all the master and slave volume mounts maintained by geo-replication can be accessed by others. They are also visible in df output. There might be cases where the mount points do not get cleaned up when the worker go...