search for: slavehost

Displaying 12 results from an estimated 12 matches for "slavehost".

2017 Aug 08
1
How to delete geo-replication session?
Sorry, I missed your previous mail. Please perform the following steps once a new node is added: - Run the gsec_create command again: gluster system:: execute gsec_create - Run the geo-rep create command with force, then start force: gluster volume geo-replication &lt;mastervol&gt; &lt;slavehost&gt;::&lt;slavevol&gt; create push-pem force gluster volume geo-replication &lt;mastervol&gt; &lt;slavehost&gt;::&lt;slavevol&gt; start force With these steps you will be able to stop/delete the Geo-rep session. I will add these steps in the documentation page (http://gluster.readthedocs.io...
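The two-step recovery quoted above can be sketched as a small shell helper. The volume and host names below are hypothetical placeholders, not values from the thread; the commands are echoed rather than executed so the sketch can be previewed without a live cluster.

```shell
#!/bin/sh
# Sketch of the steps quoted above, assuming placeholder names.
# Substitute your own master volume, slave host and slave volume.

# Build a geo-replication command line for a given action.
georep_cmd() {
    mastervol=$1 slavehost=$2 slavevol=$3 action=$4
    echo "gluster volume geo-replication $mastervol $slavehost::$slavevol $action"
}

# Step 1: regenerate and distribute the common pem keys after adding a node.
echo "gluster system:: execute gsec_create"

# Steps 2-3: recreate the session with push-pem force, then start it with force.
georep_cmd myvol backup.example.com backupvol "create push-pem force"
georep_cmd myvol backup.example.com backupvol "start force"
```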
2024 Aug 30
1
geo-rep will not initialize
...j/stuff, is seen by both and is not replicated into /gluster/n. > Also, check with the following command (found it in > https://access.redhat.com/solutions/2616791 ): > |sh -x /usr/libexec/glusterfs/gverify.sh masterVol slaveUser slaveHost > slaveVol sshPort logFileName| That must be the wrong URL, "libexec" doesn't appear there. However, running it with locally-appropriate args: /usr/libexec/glusterfs/gverify.sh j geoacct pms n 6427 /tmp/verify.log ...generates a great deal of regular logging output, logs nothing...
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, When S3 is added to master volume from new node, the following cmd should be run to generate and distribute ssh keys 1. Generate ssh keys from new node #gluster system:: execute gsec_create 2. Push those ssh keys of new node to slave #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force 3. Stop and start geo-rep But note that while removing brick and adding a brick, you should make sure the data from the brick being removed is synced to slave. Thanks, Kotresh HR On Wed, Feb 7, 2018 at 4:21 PM, Stefano Bagnara <lists at bago.o...
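Step 3 above ("stop and start geo-rep") can be sketched as follows, together with a status check to confirm the session picks up the new node. All names are hypothetical placeholders; the command lines are echoed so the sketch runs without a cluster.

```shell
#!/bin/sh
# Sketch of the stop/start cycle around a brick replacement, plus a status
# check. Volume and host names are placeholders, not values from the thread.
MASTERVOL=myvol
SLAVEHOST=backup.example.com
SLAVEVOL=backupvol
SESSION="gluster volume geo-replication $MASTERVOL $SLAVEHOST::$SLAVEVOL"

echo "$SESSION stop"           # stop before changing the brick layout
echo "$SESSION start"          # start again once the new node's keys are pushed
echo "$SESSION status detail"  # CRAWL STATUS / LAST SYNCED columns show progress
```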
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, i had a replica 2 gluster 3.12 between S1 and S2 (1 brick per node) geo-replicated to S5 where both S1 and S2 were visible in the geo-replication status and S2 "active" while S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then "remove-brick replica 2 S1". Now I have again a replica 2 gluster between S3 and S2
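The replacement described above (add-brick to replica 3, then remove-brick back to replica 2) can be sketched like this. Hostnames and brick paths are placeholders; "force" on remove-brick assumes the data on the outgoing brick has already been synced, as the reply in this thread cautions.

```shell
#!/bin/sh
# Sketch of the brick replacement above: raise the replica count with the
# new brick on S3, then drop the old brick on S1. Paths are placeholders.
VOL=myvol
ADD="gluster volume add-brick $VOL replica 3 S3:/bricks/b1"
DEL="gluster volume remove-brick $VOL replica 2 S1:/bricks/b1 force"
echo "$ADD"
echo "$DEL"
```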
2017 Aug 08
0
How to delete geo-replication session?
When I run the "gluster volume geo-replication status" I see my geo replication session correctly including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode as I have added it later after setting up geo-replication. For more details have a quick look at my previous post here:
2018 Feb 06
0
geo-replication command rsync returned with 3
Hi, As a quick workaround for geo-replication to work, please configure the following option: gluster vol geo-replication &lt;mastervol&gt; &lt;slavehost&gt;::&lt;slavevol&gt; config access_mount true The above option will not do the lazy umount and as a result, all the master and slave volume mounts maintained by geo-replication can be accessed by others. They are also visible in df output. There might be cases where the mount points do not get clean...
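The workaround above is a single config command; a minimal sketch, with placeholder session names, looks like this.

```shell
#!/bin/sh
# Sketch of the access_mount workaround above. Session names are placeholders.
MASTERVOL=myvol
SLAVEHOST=backup.example.com
SLAVEVOL=backupvol
CFG="gluster vol geo-replication $MASTERVOL $SLAVEHOST::$SLAVEVOL config access_mount true"
echo "$CFG"  # with access_mount=true the aux mounts stay mounted and appear in df
```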
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is run (without any volume name)? gluster volume geo-replication status Volume stop force should work even if a Geo-replication session exists. From the error it looks like node "arbiternode.domain.tld" in the Master cluster is down or not reachable. regards Aravinda VK On 08/07/2017 10:01 PM, mabi wrote: > Hi, >
2018 Feb 05
2
geo-replication command rsync returned with 3
On 02/05/2018 01:33 PM, Florian Weimer wrote: > Do you have strace output going further back, at least to the preceding > getcwd call? It would be interesting to see which path the kernel > reports, and if it starts with "(unreachable)". I got the strace output now, but it is very difficult to read (chdir in a multi-threaded process?). My current inclination is to blame
2018 Apr 23
0
Geo-replication faulty
...x (2 + 1) = 3 After checking logs I see that the master node has the following error: OSError: Permission denied Looking at the slave I have the following error: remote operation failed. Path: <gfid:7c6a232d-c74c-40d7-b6bd-c092fc1169f7>/anvil [Permission denied] I restarted glusterd on all slavehosts. After this I got new errors. Master node: RepceClient: call failed on peer call=26487:140016890697536:1524473494.25 method=entry_ops error=OSError glusterfs session went down error=ENOTCONN Client node: Found anomalies in (null) (gfid = 982d5d7d-2a53-4b21-8ad7-d658810d554c...
2024 Sep 01
1
geo-rep will not initialize
...ld not start simply because there was nothing to do. But yes, there are a few files created after the geo-rep session was created. And status remains just "Created." > Can you share the output of your version of 'sh -x > /usr/libexec/glusterfs/gverify.sh masterVol slaveUser slaveHost > slaveVol sshPort logFileName' check? sh -x /usr/libexec/glusterfs/gverify.sh j geoacct pms n 7887 /tmp/logger + BUFFER_SIZE=104857600 + SSH_PORT=7887 ++ gluster --print-logdir + primary_log_file=/var/log/glusterfs/geo-replication/gverify-primarymnt.log ++ gluster --print-logdir + secon...
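The manual gverify.sh pre-check discussed in the two threads above takes six positional arguments; a hedged sketch of the invocation, with placeholder values (the script checks ssh reachability and free space between the primary and secondary volumes before "create"):

```shell
#!/bin/sh
# Sketch of the gverify.sh invocation from the thread. All argument values
# here are placeholders; substitute your own volumes, user, host and port.
MASTERVOL=myvol
SLAVEUSER=geoacct            # non-root geo-rep user, as used in the thread
SLAVEHOST=backup.example.com
SLAVEVOL=backupvol
SSH_PORT=22
LOG=/tmp/gverify.log
CMD="sh -x /usr/libexec/glusterfs/gverify.sh $MASTERVOL $SLAVEUSER $SLAVEHOST $SLAVEVOL $SSH_PORT $LOG"
echo "$CMD"
```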
2018 Jan 22
1
geo-replication initial setup with existing data
2018 Feb 07
1
geo-replication command rsync returned with 3
...at.com&gt; Cc: "Gluster Users" &lt;gluster-users at gluster.org&gt; Subject: Re: [Gluster-users] geo-replication command rsync returned with 3 Hi, As a quick workaround for geo-replication to work, please configure the following option: gluster vol geo-replication &lt;mastervol&gt; &lt;slavehost&gt;::&lt;slavevol&gt; config access_mount true The above option will not do the lazy umount and as a result, all the master and slave volume mounts maintained by geo-replication can be accessed by others. They are also visible in df output. There might be cases where the mount points do not get cleaned...